00:00:00.001 Started by upstream project "autotest-per-patch" build number 132393 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.052 The recommended git tool is: git 00:00:00.053 using credential 00000000-0000-0000-0000-000000000002 00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.091 Fetching changes from the remote Git repository 00:00:00.093 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.152 Using shallow fetch with depth 1 00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.152 > git --version # timeout=10 00:00:00.209 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.610 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.623 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.637 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.637 > git config core.sparsecheckout # timeout=10 00:00:05.648 > git read-tree -mu HEAD # timeout=10 00:00:05.664 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.686 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.687 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.773 [Pipeline] Start of Pipeline 00:00:05.787 [Pipeline] library 00:00:05.789 Loading library shm_lib@master 00:00:05.789 Library shm_lib@master is cached. Copying from home. 00:00:05.804 [Pipeline] node 00:00:20.805 Still waiting to schedule task 00:00:20.806 Waiting for next available executor on ‘vagrant-vm-host’ 00:03:24.516 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:03:24.518 [Pipeline] { 00:03:24.535 [Pipeline] catchError 00:03:24.537 [Pipeline] { 00:03:24.549 [Pipeline] wrap 00:03:24.561 [Pipeline] { 00:03:24.571 [Pipeline] stage 00:03:24.572 [Pipeline] { (Prologue) 00:03:24.591 [Pipeline] echo 00:03:24.592 Node: VM-host-SM38 00:03:24.597 [Pipeline] cleanWs 00:03:24.605 [WS-CLEANUP] Deleting project workspace... 00:03:24.605 [WS-CLEANUP] Deferred wipeout is used... 
00:03:24.611 [WS-CLEANUP] done 00:03:24.813 [Pipeline] setCustomBuildProperty 00:03:24.900 [Pipeline] httpRequest 00:03:25.371 [Pipeline] echo 00:03:25.373 Sorcerer 10.211.164.20 is alive 00:03:25.382 [Pipeline] retry 00:03:25.384 [Pipeline] { 00:03:25.399 [Pipeline] httpRequest 00:03:25.403 HttpMethod: GET 00:03:25.404 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:25.405 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:25.410 Response Code: HTTP/1.1 200 OK 00:03:25.411 Success: Status code 200 is in the accepted range: 200,404 00:03:25.411 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:39.220 [Pipeline] } 00:03:39.238 [Pipeline] // retry 00:03:39.246 [Pipeline] sh 00:03:39.561 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:39.576 [Pipeline] httpRequest 00:03:39.981 [Pipeline] echo 00:03:39.983 Sorcerer 10.211.164.20 is alive 00:03:39.994 [Pipeline] retry 00:03:39.996 [Pipeline] { 00:03:40.011 [Pipeline] httpRequest 00:03:40.016 HttpMethod: GET 00:03:40.017 URL: http://10.211.164.20/packages/spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:03:40.018 Sending request to url: http://10.211.164.20/packages/spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:03:40.046 Response Code: HTTP/1.1 200 OK 00:03:40.054 Success: Status code 200 is in the accepted range: 200,404 00:03:40.055 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:04:59.522 [Pipeline] } 00:04:59.540 [Pipeline] // retry 00:04:59.548 [Pipeline] sh 00:04:59.828 + tar --no-same-owner -xf spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:05:03.115 [Pipeline] sh 00:05:03.392 + git -C spdk log --oneline -n5 00:05:03.393 82b85d9ca bdev/malloc: malloc_done() uses switch-case for clean up 00:05:03.393 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:05:03.393 349af566b nvmf: Get metadata config by not bdev but bdev_desc 00:05:03.393 1981e6eec bdevperf: Add hide_metadata option 00:05:03.393 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc 00:05:03.410 [Pipeline] writeFile 00:05:03.424 [Pipeline] sh 00:05:03.703 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:03.713 [Pipeline] sh 00:05:03.988 + cat autorun-spdk.conf 00:05:03.988 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:03.988 SPDK_TEST_NVME=1 00:05:03.988 SPDK_TEST_FTL=1 00:05:03.988 SPDK_TEST_ISAL=1 00:05:03.988 SPDK_RUN_ASAN=1 00:05:03.988 SPDK_RUN_UBSAN=1 00:05:03.988 SPDK_TEST_XNVME=1 00:05:03.988 SPDK_TEST_NVME_FDP=1 00:05:03.988 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:03.994 RUN_NIGHTLY=0 00:05:03.995 [Pipeline] } 00:05:04.007 [Pipeline] // stage 00:05:04.020 [Pipeline] stage 00:05:04.023 [Pipeline] { (Run VM) 00:05:04.035 [Pipeline] sh 00:05:04.309 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:04.309 + echo 'Start stage prepare_nvme.sh' 00:05:04.309 Start stage prepare_nvme.sh 00:05:04.309 + [[ -n 2 ]] 00:05:04.309 + disk_prefix=ex2 00:05:04.309 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:05:04.309 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:05:04.309 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:05:04.309 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:04.309 ++ SPDK_TEST_NVME=1 00:05:04.309 ++ SPDK_TEST_FTL=1 00:05:04.309 ++ SPDK_TEST_ISAL=1 
00:05:04.309 ++ SPDK_RUN_ASAN=1 00:05:04.309 ++ SPDK_RUN_UBSAN=1 00:05:04.309 ++ SPDK_TEST_XNVME=1 00:05:04.309 ++ SPDK_TEST_NVME_FDP=1 00:05:04.309 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:04.309 ++ RUN_NIGHTLY=0 00:05:04.309 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:05:04.309 + nvme_files=() 00:05:04.309 + declare -A nvme_files 00:05:04.309 + backend_dir=/var/lib/libvirt/images/backends 00:05:04.309 + nvme_files['nvme.img']=5G 00:05:04.309 + nvme_files['nvme-cmb.img']=5G 00:05:04.309 + nvme_files['nvme-multi0.img']=4G 00:05:04.309 + nvme_files['nvme-multi1.img']=4G 00:05:04.309 + nvme_files['nvme-multi2.img']=4G 00:05:04.309 + nvme_files['nvme-openstack.img']=8G 00:05:04.309 + nvme_files['nvme-zns.img']=5G 00:05:04.309 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:04.309 + (( SPDK_TEST_FTL == 1 )) 00:05:04.309 + nvme_files["nvme-ftl.img"]=6G 00:05:04.309 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:04.309 + nvme_files["nvme-fdp.img"]=1G 00:05:04.309 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:04.309 + for nvme in "${!nvme_files[@]}" 00:05:04.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:05:04.309 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:04.309 + for nvme in "${!nvme_files[@]}" 00:05:04.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:05:04.309 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:05:04.309 + for nvme in "${!nvme_files[@]}" 00:05:04.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:05:04.309 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:04.309 + for nvme in "${!nvme_files[@]}" 00:05:04.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:05:04.309 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:04.309 + for nvme in "${!nvme_files[@]}" 00:05:04.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:05:04.872 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:04.872 + for nvme in "${!nvme_files[@]}" 00:05:04.872 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:05:04.872 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:04.872 + for nvme in "${!nvme_files[@]}" 00:05:04.872 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:05:04.872 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:04.872 + for nvme in "${!nvme_files[@]}" 00:05:04.872 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:05:04.872 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:05:04.872 + for nvme in "${!nvme_files[@]}" 00:05:04.872 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:05:05.130 
Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:05.130 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:05:05.387 + echo 'End stage prepare_nvme.sh' 00:05:05.387 End stage prepare_nvme.sh 00:05:05.398 [Pipeline] sh 00:05:05.678 + DISTRO=fedora39 00:05:05.678 + CPUS=10 00:05:05.678 + RAM=12288 00:05:05.678 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:05.678 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:05:05.678 00:05:05.678 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:05:05.678 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:05:05.678 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:05:05.678 HELP=0 00:05:05.678 DRY_RUN=0 00:05:05.678 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:05:05.678 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:05:05.678 NVME_AUTO_CREATE=0 00:05:05.678 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:05:05.678 NVME_CMB=,,,, 00:05:05.678 NVME_PMR=,,,, 00:05:05.678 NVME_ZNS=,,,, 00:05:05.678 NVME_MS=true,,,, 00:05:05.678 NVME_FDP=,,,on, 00:05:05.678 SPDK_VAGRANT_DISTRO=fedora39 00:05:05.678 SPDK_VAGRANT_VMCPU=10 00:05:05.678 SPDK_VAGRANT_VMRAM=12288 00:05:05.678 SPDK_VAGRANT_PROVIDER=libvirt 00:05:05.678 SPDK_VAGRANT_HTTP_PROXY= 00:05:05.678 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:05.678 SPDK_OPENSTACK_NETWORK=0 00:05:05.678 VAGRANT_PACKAGE_BOX=0 00:05:05.678 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:05:05.678 FORCE_DISTRO=true 00:05:05.678 VAGRANT_BOX_VERSION= 00:05:05.678 EXTRA_VAGRANTFILES= 00:05:05.678 NIC_MODEL=e1000 00:05:05.678 00:05:05.678 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:05:05.678 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:05:08.205 Bringing machine 'default' up with 'libvirt' provider... 00:05:08.463 ==> default: Creating image (snapshot of base box volume). 00:05:08.463 ==> default: Creating domain with the following settings... 
00:05:08.463 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732109107_67fa06d5b0037a9a65a8 00:05:08.463 ==> default: -- Domain type: kvm 00:05:08.463 ==> default: -- Cpus: 10 00:05:08.463 ==> default: -- Feature: acpi 00:05:08.463 ==> default: -- Feature: apic 00:05:08.463 ==> default: -- Feature: pae 00:05:08.463 ==> default: -- Memory: 12288M 00:05:08.463 ==> default: -- Memory Backing: hugepages: 00:05:08.463 ==> default: -- Management MAC: 00:05:08.463 ==> default: -- Loader: 00:05:08.463 ==> default: -- Nvram: 00:05:08.463 ==> default: -- Base box: spdk/fedora39 00:05:08.463 ==> default: -- Storage pool: default 00:05:08.463 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732109107_67fa06d5b0037a9a65a8.img (20G) 00:05:08.463 ==> default: -- Volume Cache: default 00:05:08.463 ==> default: -- Kernel: 00:05:08.463 ==> default: -- Initrd: 00:05:08.463 ==> default: -- Graphics Type: vnc 00:05:08.463 ==> default: -- Graphics Port: -1 00:05:08.463 ==> default: -- Graphics IP: 127.0.0.1 00:05:08.463 ==> default: -- Graphics Password: Not defined 00:05:08.463 ==> default: -- Video Type: cirrus 00:05:08.463 ==> default: -- Video VRAM: 9216 00:05:08.463 ==> default: -- Sound Type: 00:05:08.463 ==> default: -- Keymap: en-us 00:05:08.463 ==> default: -- TPM Path: 00:05:08.463 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:08.463 ==> default: -- Command line args: 00:05:08.463 ==> default: -> value=-device, 00:05:08.463 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:08.463 ==> default: -> value=-drive, 00:05:08.463 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:05:08.463 ==> default: -> value=-device, 00:05:08.463 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:05:08.463 ==> default: -> value=-device, 00:05:08.463 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:08.463 ==> default: -> value=-drive, 00:05:08.463 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:05:08.463 ==> default: -> value=-device, 00:05:08.463 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:08.463 ==> default: -> value=-device, 00:05:08.463 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:05:08.463 ==> default: -> value=-drive, 00:05:08.463 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:05:08.463 ==> default: -> value=-device, 00:05:08.463 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:08.463 ==> default: -> value=-drive, 00:05:08.464 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:05:08.464 ==> default: -> value=-device, 00:05:08.464 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:08.464 ==> default: -> value=-drive, 00:05:08.464 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:05:08.464 ==> default: -> value=-device, 00:05:08.464 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:08.464 ==> default: -> value=-device, 00:05:08.464 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:05:08.464 ==> default: -> value=-device, 00:05:08.464 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:05:08.464 ==> default: -> value=-drive, 00:05:08.464 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:05:08.464 ==> default: -> value=-device, 00:05:08.464 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:08.722 ==> default: Creating shared folders metadata... 00:05:08.722 ==> default: Starting domain. 00:05:09.655 ==> default: Waiting for domain to get an IP address... 00:05:24.564 ==> default: Waiting for SSH to become available... 00:05:24.564 ==> default: Configuring and enabling network interfaces... 00:05:27.890 default: SSH address: 192.168.121.5:22 00:05:27.890 default: SSH username: vagrant 00:05:27.890 default: SSH auth method: private key 00:05:29.261 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:35.883 ==> default: Mounting SSHFS shared folder... 00:05:37.254 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:37.254 ==> default: Checking Mount.. 00:05:38.186 ==> default: Folder Successfully Mounted! 00:05:38.186 00:05:38.186 SUCCESS! 00:05:38.186 00:05:38.186 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:05:38.186 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:38.186 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:05:38.186 00:05:38.194 [Pipeline] } 00:05:38.209 [Pipeline] // stage 00:05:38.218 [Pipeline] dir 00:05:38.219 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:05:38.220 [Pipeline] { 00:05:38.231 [Pipeline] catchError 00:05:38.233 [Pipeline] { 00:05:38.245 [Pipeline] sh 00:05:38.521 + vagrant ssh-config --host vagrant 00:05:38.521 + sed -ne '/^Host/,$p' 00:05:38.521 + tee ssh_conf 00:05:41.043 Host vagrant 00:05:41.043 HostName 192.168.121.5 00:05:41.043 User vagrant 00:05:41.043 Port 22 00:05:41.043 UserKnownHostsFile /dev/null 00:05:41.043 StrictHostKeyChecking no 00:05:41.043 PasswordAuthentication no 00:05:41.043 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:41.043 IdentitiesOnly yes 00:05:41.043 LogLevel FATAL 00:05:41.043 ForwardAgent yes 00:05:41.043 ForwardX11 yes 00:05:41.043 00:05:41.058 [Pipeline] withEnv 00:05:41.061 [Pipeline] { 00:05:41.076 [Pipeline] sh 00:05:41.415 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:05:41.415 source /etc/os-release 00:05:41.415 [[ -e /image.version ]] && img=$(< /image.version) 00:05:41.415 # Minimal, systemd-like check. 
00:05:41.415 if [[ -e /.dockerenv ]]; then 00:05:41.415 # Clear garbage from the node'\''s name: 00:05:41.415 # agt-er_autotest_547-896 -> autotest_547-896 00:05:41.415 # $HOSTNAME is the actual container id 00:05:41.415 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:41.415 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:41.415 # We can assume this is a mount from a host where container is running, 00:05:41.415 # so fetch its hostname to easily identify the target swarm worker. 00:05:41.415 container="$(< /etc/hostname) ($agent)" 00:05:41.415 else 00:05:41.415 # Fallback 00:05:41.415 container=$agent 00:05:41.415 fi 00:05:41.415 fi 00:05:41.415 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:41.415 ' 00:05:41.441 [Pipeline] } 00:05:41.455 [Pipeline] // withEnv 00:05:41.463 [Pipeline] setCustomBuildProperty 00:05:41.474 [Pipeline] stage 00:05:41.476 [Pipeline] { (Tests) 00:05:41.493 [Pipeline] sh 00:05:41.769 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:41.785 [Pipeline] sh 00:05:42.065 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:42.080 [Pipeline] timeout 00:05:42.080 Timeout set to expire in 50 min 00:05:42.083 [Pipeline] { 00:05:42.100 [Pipeline] sh 00:05:42.376 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:05:42.942 HEAD is now at 82b85d9ca bdev/malloc: malloc_done() uses switch-case for clean up 00:05:42.953 [Pipeline] sh 00:05:43.230 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:05:43.499 [Pipeline] sh 00:05:43.779 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:43.795 [Pipeline] sh 00:05:44.074 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:05:44.074 ++ readlink -f spdk_repo 00:05:44.074 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:44.074 + [[ -n /home/vagrant/spdk_repo ]] 00:05:44.074 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:44.074 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:44.074 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:44.074 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:44.074 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:44.074 + [[ nvme-vg-autotest == pkgdep-* ]] 00:05:44.074 + cd /home/vagrant/spdk_repo 00:05:44.074 + source /etc/os-release 00:05:44.074 ++ NAME='Fedora Linux' 00:05:44.074 ++ VERSION='39 (Cloud Edition)' 00:05:44.074 ++ ID=fedora 00:05:44.074 ++ VERSION_ID=39 00:05:44.074 ++ VERSION_CODENAME= 00:05:44.074 ++ PLATFORM_ID=platform:f39 00:05:44.074 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:44.074 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:44.074 ++ LOGO=fedora-logo-icon 00:05:44.074 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:44.074 ++ HOME_URL=https://fedoraproject.org/ 00:05:44.074 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:44.074 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:44.074 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:44.074 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:44.074 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:44.074 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:44.074 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:44.074 ++ SUPPORT_END=2024-11-12 00:05:44.074 ++ VARIANT='Cloud Edition' 00:05:44.074 ++ VARIANT_ID=cloud 00:05:44.074 + uname -a 00:05:44.333 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:44.333 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:44.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.851 Hugepages 00:05:44.851 node hugesize free / total 00:05:44.851 node0 1048576kB 0 / 0 00:05:44.851 node0 2048kB 0 / 0 00:05:44.851 00:05:44.851 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.851 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:44.851 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:44.851 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:44.851 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:44.851 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:44.851 + rm -f /tmp/spdk-ld-path 00:05:44.851 + source autorun-spdk.conf 00:05:44.851 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:44.851 ++ SPDK_TEST_NVME=1 00:05:44.851 ++ SPDK_TEST_FTL=1 00:05:44.851 ++ SPDK_TEST_ISAL=1 00:05:44.851 ++ SPDK_RUN_ASAN=1 00:05:44.851 ++ SPDK_RUN_UBSAN=1 00:05:44.851 ++ SPDK_TEST_XNVME=1 00:05:44.851 ++ SPDK_TEST_NVME_FDP=1 00:05:44.851 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:44.851 ++ RUN_NIGHTLY=0 00:05:44.851 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:44.851 + [[ -n '' ]] 00:05:44.851 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:44.851 + for M in /var/spdk/build-*-manifest.txt 00:05:44.851 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:44.851 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:44.851 + for M in /var/spdk/build-*-manifest.txt 00:05:44.851 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:44.851 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:44.851 + for M in /var/spdk/build-*-manifest.txt 00:05:44.851 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:44.851 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:44.851 ++ uname 00:05:44.851 + [[ Linux == \L\i\n\u\x ]] 00:05:44.851 + sudo dmesg -T 00:05:44.851 + sudo dmesg --clear 00:05:44.851 + dmesg_pid=5032 00:05:44.851 
+ [[ Fedora Linux == FreeBSD ]] 00:05:44.851 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:44.851 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:44.851 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:44.851 + [[ -x /usr/src/fio-static/fio ]] 00:05:44.851 + sudo dmesg -Tw 00:05:44.851 + export FIO_BIN=/usr/src/fio-static/fio 00:05:44.851 + FIO_BIN=/usr/src/fio-static/fio 00:05:44.851 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:44.851 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:44.851 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:44.851 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:44.851 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:44.851 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:44.851 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:44.851 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:44.851 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:45.109 13:25:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:05:45.109 13:25:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:45.109 13:25:44 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:05:45.109 13:25:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:45.109 13:25:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:45.109 13:25:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:05:45.109 13:25:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.109 13:25:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:45.109 13:25:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:45.109 13:25:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.109 13:25:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.109 13:25:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.109 13:25:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.109 13:25:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.109 13:25:44 -- paths/export.sh@5 -- $ export PATH 00:05:45.109 13:25:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.109 13:25:44 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:45.109 13:25:44 -- common/autobuild_common.sh@493 -- $ date +%s 00:05:45.109 13:25:44 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109144.XXXXXX 00:05:45.109 13:25:44 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109144.HPpdZ9 00:05:45.109 13:25:44 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:05:45.109 13:25:44 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:05:45.110 13:25:44 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:45.110 13:25:44 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:45.110 13:25:44 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:45.110 13:25:44 -- common/autobuild_common.sh@509 -- $ get_config_params 00:05:45.110 13:25:44 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:05:45.110 13:25:44 -- common/autotest_common.sh@10 -- $ set +x 00:05:45.110 13:25:44 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:05:45.110 13:25:44 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:05:45.110 13:25:44 -- pm/common@17 -- $ local monitor 00:05:45.110 13:25:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:45.110 13:25:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:45.110 13:25:44 -- pm/common@25 -- $ sleep 1 00:05:45.110 13:25:44 -- pm/common@21 -- $ date +%s 00:05:45.110 13:25:44 -- pm/common@21 -- $ date +%s 00:05:45.110 13:25:44 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109144 00:05:45.110 13:25:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109144 00:05:45.110 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109144_collect-vmstat.pm.log 00:05:45.110 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109144_collect-cpu-load.pm.log 00:05:46.043 13:25:45 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:05:46.043 13:25:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:46.043 13:25:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:46.043 13:25:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:46.043 13:25:45 -- spdk/autobuild.sh@16 -- $ date -u 00:05:46.043 Wed Nov 20 01:25:45 PM UTC 2024 00:05:46.043 13:25:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:46.044 v25.01-pre-242-g82b85d9ca 00:05:46.044 13:25:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:46.044 13:25:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:46.044 13:25:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:46.044 13:25:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:46.044 13:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:05:46.044 ************************************ 00:05:46.044 START TEST asan 00:05:46.044 ************************************ 00:05:46.044 using asan 00:05:46.044 13:25:45 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:05:46.044 00:05:46.044 real 0m0.000s 00:05:46.044 user 0m0.000s 00:05:46.044 sys 0m0.000s 00:05:46.044 13:25:45 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:46.044 13:25:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:05:46.044 ************************************ 00:05:46.044 END TEST asan 00:05:46.044 ************************************ 00:05:46.044 13:25:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:46.044 13:25:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:46.044 13:25:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:46.044 13:25:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:46.044 13:25:45 -- common/autotest_common.sh@10 -- $ set +x 00:05:46.044 ************************************ 00:05:46.044 START TEST ubsan 00:05:46.044 ************************************ 00:05:46.044 using ubsan 00:05:46.044 13:25:45 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:05:46.044 00:05:46.044 real 0m0.000s 00:05:46.044 user 0m0.000s 00:05:46.044 sys 0m0.000s 00:05:46.044 ************************************ 00:05:46.044 END TEST ubsan 00:05:46.044 13:25:45 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:46.044 13:25:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:46.044 ************************************ 00:05:46.044 13:25:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:46.044 13:25:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:46.044 13:25:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:46.044 13:25:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:46.044 13:25:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:46.044 13:25:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:46.044 13:25:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:05:46.044 13:25:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:46.044 13:25:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:05:46.301 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:46.301 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:46.559 Using 'verbs' RDMA provider 00:05:57.458 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:07.420 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:07.936 Creating mk/config.mk...done. 00:06:07.936 Creating mk/cc.flags.mk...done. 00:06:07.936 Type 'make' to build. 00:06:07.936 13:26:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:07.936 13:26:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:07.936 13:26:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:07.936 13:26:07 -- common/autotest_common.sh@10 -- $ set +x 00:06:07.936 ************************************ 00:06:07.936 START TEST make 00:06:07.936 ************************************ 00:06:07.936 13:26:07 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:08.193 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:06:08.193 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:06:08.193 meson setup builddir \ 00:06:08.193 -Dwith-libaio=enabled \ 00:06:08.193 -Dwith-liburing=enabled \ 00:06:08.193 -Dwith-libvfn=disabled \ 00:06:08.193 -Dwith-spdk=disabled \ 00:06:08.193 -Dexamples=false \ 00:06:08.193 -Dtests=false \ 00:06:08.193 -Dtools=false && \ 00:06:08.193 meson compile -C builddir && \ 00:06:08.193 cd -) 00:06:08.193 make[1]: Nothing to be done for 'all'. 
00:06:10.091 The Meson build system 00:06:10.091 Version: 1.5.0 00:06:10.091 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:06:10.091 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:10.091 Build type: native build 00:06:10.091 Project name: xnvme 00:06:10.091 Project version: 0.7.5 00:06:10.091 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:10.091 C linker for the host machine: cc ld.bfd 2.40-14 00:06:10.091 Host machine cpu family: x86_64 00:06:10.091 Host machine cpu: x86_64 00:06:10.091 Message: host_machine.system: linux 00:06:10.091 Compiler for C supports arguments -Wno-missing-braces: YES 00:06:10.091 Compiler for C supports arguments -Wno-cast-function-type: YES 00:06:10.091 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:06:10.091 Run-time dependency threads found: YES 00:06:10.091 Has header "setupapi.h" : NO 00:06:10.091 Has header "linux/blkzoned.h" : YES 00:06:10.091 Has header "linux/blkzoned.h" : YES (cached) 00:06:10.091 Has header "libaio.h" : YES 00:06:10.091 Library aio found: YES 00:06:10.091 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:10.091 Run-time dependency liburing found: YES 2.2 00:06:10.091 Dependency libvfn skipped: feature with-libvfn disabled 00:06:10.091 Found CMake: /usr/bin/cmake (3.27.7) 00:06:10.091 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:06:10.091 Subproject spdk : skipped: feature with-spdk disabled 00:06:10.091 Run-time dependency appleframeworks found: NO (tried framework) 00:06:10.091 Run-time dependency appleframeworks found: NO (tried framework) 00:06:10.091 Library rt found: YES 00:06:10.091 Checking for function "clock_gettime" with dependency -lrt: YES 00:06:10.091 Configuring xnvme_config.h using configuration 00:06:10.091 Configuring xnvme.spec using configuration 00:06:10.091 Run-time dependency bash-completion found: YES 2.11 00:06:10.091 Message: Bash-completions: /usr/share/bash-completion/completions 00:06:10.091 Program cp found: YES (/usr/bin/cp) 00:06:10.091 Build targets in project: 3 00:06:10.091 00:06:10.091 xnvme 0.7.5 00:06:10.091 00:06:10.091 Subprojects 00:06:10.091 spdk : NO Feature 'with-spdk' disabled 00:06:10.091 00:06:10.091 User defined options 00:06:10.091 examples : false 00:06:10.091 tests : false 00:06:10.091 tools : false 00:06:10.091 with-libaio : enabled 00:06:10.091 with-liburing: enabled 00:06:10.091 with-libvfn : disabled 00:06:10.091 with-spdk : disabled 00:06:10.091 00:06:10.091 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:10.348 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:06:10.348 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:06:10.606 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:06:10.606 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:06:10.606 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:06:10.606 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:06:10.606 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:06:10.606 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:06:10.606 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:06:10.606 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:06:10.606 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:06:10.606 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:06:10.606 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:06:10.606 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:06:10.606 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:06:10.606 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:06:10.606 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:06:10.606 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:06:10.606 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:06:10.606 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:06:10.606 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:06:10.606 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:06:10.606 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:06:10.606 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:06:10.606 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:06:10.864 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:06:10.864 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:06:10.864 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:06:10.864 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:06:10.864 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:06:10.864 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:06:10.864 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:06:10.864 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:06:10.864 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:06:10.864 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:06:10.864 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:06:10.864 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:06:10.864 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:06:10.864 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:06:10.864 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:06:10.864 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:06:10.864 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:06:10.864 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:06:10.864 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:06:10.864 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:06:10.864 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:06:10.864 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:06:10.864 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:06:10.864 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:06:10.864 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:06:10.864 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:06:10.864 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:06:10.864 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:06:10.864 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:06:10.864 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:06:10.864 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:06:10.864 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:06:10.864 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:06:10.864 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:06:10.864 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:06:10.864 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:06:10.864 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:06:11.122 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:06:11.122 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:06:11.122 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:06:11.122 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:06:11.122 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:06:11.122 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:06:11.122 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:06:11.122 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:06:11.122 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:06:11.122 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:06:11.122 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:06:11.122 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:06:11.689 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:06:11.689 [75/76] Linking static target lib/libxnvme.a 00:06:11.689 [76/76] Linking target lib/libxnvme.so.0.7.5 00:06:11.689 INFO: autodetecting backend as ninja 00:06:11.689 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:11.689 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:06:18.308 The Meson build system 00:06:18.308 Version: 1.5.0 00:06:18.308 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:18.308 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:18.308 Build type: native build 00:06:18.308 Program cat found: YES (/usr/bin/cat) 00:06:18.308 Project name: DPDK 00:06:18.308 Project version: 24.03.0 00:06:18.308 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:18.308 C linker for the host machine: cc ld.bfd 2.40-14 00:06:18.308 Host machine cpu family: x86_64 00:06:18.308 Host machine cpu: x86_64 00:06:18.308 Message: ## Building in Developer Mode ## 00:06:18.308 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:18.308 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:18.308 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:18.308 Program python3 found: YES (/usr/bin/python3) 00:06:18.308 Program cat found: YES (/usr/bin/cat) 00:06:18.308 Compiler for C supports arguments -march=native: YES 00:06:18.308 Checking for size of "void *" : 8 00:06:18.308 Checking for size of "void *" : 8 (cached) 00:06:18.308 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:06:18.308 Library m found: YES 00:06:18.308 Library numa found: YES 00:06:18.308 Has header "numaif.h" : YES 00:06:18.308 Library fdt found: NO 00:06:18.308 Library execinfo found: NO 00:06:18.308 Has header "execinfo.h" : YES 00:06:18.308 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:18.308 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:18.308 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:18.308 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:18.308 Run-time dependency openssl found: YES 3.1.1 00:06:18.308 Run-time dependency libpcap found: YES 1.10.4 00:06:18.308 Has header "pcap.h" with dependency libpcap: YES 00:06:18.308 Compiler for C supports arguments -Wcast-qual: YES 00:06:18.308 Compiler for C supports arguments -Wdeprecated: YES 00:06:18.308 Compiler for C supports arguments -Wformat: YES 00:06:18.308 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:18.308 Compiler for C supports arguments -Wformat-security: NO 00:06:18.308 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:18.309 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:18.309 Compiler for C supports arguments -Wnested-externs: YES 00:06:18.309 Compiler for C supports arguments -Wold-style-definition: YES 00:06:18.309 Compiler for C supports arguments -Wpointer-arith: YES 00:06:18.309 Compiler for C supports arguments -Wsign-compare: YES 00:06:18.309 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:18.309 Compiler for C supports arguments -Wundef: YES 00:06:18.309 Compiler for C supports arguments -Wwrite-strings: YES 00:06:18.309 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:18.309 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:18.309 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:18.309 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:18.309 Program objdump found: YES (/usr/bin/objdump) 00:06:18.309 Compiler for C supports arguments -mavx512f: YES 00:06:18.309 Checking if "AVX512 checking" compiles: YES 00:06:18.309 Fetching value of define "__SSE4_2__" : 1 00:06:18.309 Fetching value of define "__AES__" : 1 00:06:18.309 Fetching value of define "__AVX__" : 1 00:06:18.309 Fetching value of define "__AVX2__" : 1 00:06:18.309 Fetching value of define "__AVX512BW__" : 1 00:06:18.309 Fetching value of define "__AVX512CD__" : 1 00:06:18.309 Fetching value of define "__AVX512DQ__" : 1 00:06:18.309 Fetching value of define "__AVX512F__" : 1 00:06:18.309 Fetching value of define "__AVX512VL__" : 1 00:06:18.309 Fetching value of define "__PCLMUL__" : 1 00:06:18.309 Fetching value of define "__RDRND__" : 1 00:06:18.309 Fetching value of define "__RDSEED__" : 1 00:06:18.309 Fetching value of define "__VPCLMULQDQ__" : 1 00:06:18.309 Fetching value of define "__znver1__" : (undefined) 00:06:18.309 Fetching value of define "__znver2__" : (undefined) 00:06:18.309 Fetching value of define "__znver3__" : (undefined) 00:06:18.309 Fetching value of define "__znver4__" : (undefined) 00:06:18.309 Library asan found: YES 00:06:18.309 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:18.309 Message: lib/log: Defining dependency "log" 00:06:18.309 Message: lib/kvargs: Defining dependency "kvargs" 00:06:18.309 Message: lib/telemetry: Defining dependency "telemetry" 00:06:18.309 Library rt found: YES 00:06:18.309 Checking for function "getentropy" : NO 00:06:18.309 Message: 
lib/eal: Defining dependency "eal" 00:06:18.309 Message: lib/ring: Defining dependency "ring" 00:06:18.309 Message: lib/rcu: Defining dependency "rcu" 00:06:18.309 Message: lib/mempool: Defining dependency "mempool" 00:06:18.309 Message: lib/mbuf: Defining dependency "mbuf" 00:06:18.309 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:18.309 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:18.309 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:18.309 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:18.309 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:18.309 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:06:18.309 Compiler for C supports arguments -mpclmul: YES 00:06:18.309 Compiler for C supports arguments -maes: YES 00:06:18.309 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:18.309 Compiler for C supports arguments -mavx512bw: YES 00:06:18.309 Compiler for C supports arguments -mavx512dq: YES 00:06:18.309 Compiler for C supports arguments -mavx512vl: YES 00:06:18.309 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:18.309 Compiler for C supports arguments -mavx2: YES 00:06:18.309 Compiler for C supports arguments -mavx: YES 00:06:18.309 Message: lib/net: Defining dependency "net" 00:06:18.309 Message: lib/meter: Defining dependency "meter" 00:06:18.309 Message: lib/ethdev: Defining dependency "ethdev" 00:06:18.309 Message: lib/pci: Defining dependency "pci" 00:06:18.309 Message: lib/cmdline: Defining dependency "cmdline" 00:06:18.309 Message: lib/hash: Defining dependency "hash" 00:06:18.309 Message: lib/timer: Defining dependency "timer" 00:06:18.309 Message: lib/compressdev: Defining dependency "compressdev" 00:06:18.309 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:18.309 Message: lib/dmadev: Defining dependency "dmadev" 00:06:18.309 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:18.309 Message: lib/power: Defining dependency "power" 00:06:18.309 Message: lib/reorder: Defining dependency "reorder" 00:06:18.309 Message: lib/security: Defining dependency "security" 00:06:18.309 Has header "linux/userfaultfd.h" : YES 00:06:18.309 Has header "linux/vduse.h" : YES 00:06:18.309 Message: lib/vhost: Defining dependency "vhost" 00:06:18.309 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:18.309 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:18.309 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:18.309 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:18.309 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:18.309 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:18.309 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:18.309 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:18.309 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:18.309 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:18.309 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:18.309 Configuring doxy-api-html.conf using configuration 00:06:18.309 Configuring doxy-api-man.conf using configuration 00:06:18.309 Program mandb found: YES (/usr/bin/mandb) 00:06:18.309 Program sphinx-build found: NO 00:06:18.309 Configuring rte_build_config.h using configuration 00:06:18.309 Message: 00:06:18.309 ================= 00:06:18.309 Applications Enabled 00:06:18.309 
================= 00:06:18.309 00:06:18.309 apps: 00:06:18.309 00:06:18.309 00:06:18.309 Message: 00:06:18.309 ================= 00:06:18.309 Libraries Enabled 00:06:18.309 ================= 00:06:18.309 00:06:18.309 libs: 00:06:18.309 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:18.309 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:18.309 cryptodev, dmadev, power, reorder, security, vhost, 00:06:18.309 00:06:18.309 Message: 00:06:18.309 =============== 00:06:18.309 Drivers Enabled 00:06:18.309 =============== 00:06:18.309 00:06:18.309 common: 00:06:18.309 00:06:18.309 bus: 00:06:18.309 pci, vdev, 00:06:18.309 mempool: 00:06:18.309 ring, 00:06:18.309 dma: 00:06:18.309 00:06:18.309 net: 00:06:18.309 00:06:18.309 crypto: 00:06:18.309 00:06:18.309 compress: 00:06:18.309 00:06:18.309 vdpa: 00:06:18.309 00:06:18.309 00:06:18.309 Message: 00:06:18.309 ================= 00:06:18.309 Content Skipped 00:06:18.309 ================= 00:06:18.309 00:06:18.309 apps: 00:06:18.309 dumpcap: explicitly disabled via build config 00:06:18.309 graph: explicitly disabled via build config 00:06:18.309 pdump: explicitly disabled via build config 00:06:18.309 proc-info: explicitly disabled via build config 00:06:18.309 test-acl: explicitly disabled via build config 00:06:18.309 test-bbdev: explicitly disabled via build config 00:06:18.309 test-cmdline: explicitly disabled via build config 00:06:18.309 test-compress-perf: explicitly disabled via build config 00:06:18.309 test-crypto-perf: explicitly disabled via build config 00:06:18.309 test-dma-perf: explicitly disabled via build config 00:06:18.309 test-eventdev: explicitly disabled via build config 00:06:18.309 test-fib: explicitly disabled via build config 00:06:18.309 test-flow-perf: explicitly disabled via build config 00:06:18.309 test-gpudev: explicitly disabled via build config 00:06:18.309 test-mldev: explicitly disabled via build config 00:06:18.309 test-pipeline: explicitly disabled via build config 00:06:18.309 test-pmd: explicitly disabled via build config 00:06:18.309 test-regex: explicitly disabled via build config 00:06:18.309 test-sad: explicitly disabled via build config 00:06:18.309 test-security-perf: explicitly disabled via build config 00:06:18.309 00:06:18.309 libs: 00:06:18.310 argparse: explicitly disabled via build config 00:06:18.310 metrics: explicitly disabled via build config 00:06:18.310 acl: explicitly disabled via build config 00:06:18.310 bbdev: explicitly disabled via build config 00:06:18.310 bitratestats: explicitly disabled via build config 00:06:18.310 bpf: explicitly disabled via build config 00:06:18.310 cfgfile: explicitly disabled via build config 00:06:18.310 distributor: explicitly disabled via build config 00:06:18.310 efd: explicitly disabled via build config 00:06:18.310 eventdev: explicitly disabled via build config 00:06:18.310 dispatcher: explicitly disabled via build config 00:06:18.310 gpudev: explicitly disabled via build config 00:06:18.310 gro: explicitly disabled via build config 00:06:18.310 gso: explicitly disabled via build config 00:06:18.310 ip_frag: explicitly disabled via build config 00:06:18.310 jobstats: explicitly disabled via build config 00:06:18.310 latencystats: explicitly disabled via build config 00:06:18.310 lpm: explicitly disabled via build config 00:06:18.310 member: explicitly disabled via build config 00:06:18.310 pcapng: explicitly disabled via build config 00:06:18.310 rawdev: explicitly disabled via build config 00:06:18.310 regexdev: explicitly 
disabled via build config 00:06:18.310 mldev: explicitly disabled via build config 00:06:18.310 rib: explicitly disabled via build config 00:06:18.310 sched: explicitly disabled via build config 00:06:18.310 stack: explicitly disabled via build config 00:06:18.310 ipsec: explicitly disabled via build config 00:06:18.310 pdcp: explicitly disabled via build config 00:06:18.310 fib: explicitly disabled via build config 00:06:18.310 port: explicitly disabled via build config 00:06:18.310 pdump: explicitly disabled via build config 00:06:18.310 table: explicitly disabled via build config 00:06:18.310 pipeline: explicitly disabled via build config 00:06:18.310 graph: explicitly disabled via build config 00:06:18.310 node: explicitly disabled via build config 00:06:18.310 00:06:18.310 drivers: 00:06:18.310 common/cpt: not in enabled drivers build config 00:06:18.310 common/dpaax: not in enabled drivers build config 00:06:18.310 common/iavf: not in enabled drivers build config 00:06:18.310 common/idpf: not in enabled drivers build config 00:06:18.310 common/ionic: not in enabled drivers build config 00:06:18.310 common/mvep: not in enabled drivers build config 00:06:18.310 common/octeontx: not in enabled drivers build config 00:06:18.310 bus/auxiliary: not in enabled drivers build config 00:06:18.310 bus/cdx: not in enabled drivers build config 00:06:18.310 bus/dpaa: not in enabled drivers build config 00:06:18.310 bus/fslmc: not in enabled drivers build config 00:06:18.310 bus/ifpga: not in enabled drivers build config 00:06:18.310 bus/platform: not in enabled drivers build config 00:06:18.310 bus/uacce: not in enabled drivers build config 00:06:18.310 bus/vmbus: not in enabled drivers build config 00:06:18.310 common/cnxk: not in enabled drivers build config 00:06:18.310 common/mlx5: not in enabled drivers build config 00:06:18.310 common/nfp: not in enabled drivers build config 00:06:18.310 common/nitrox: not in enabled drivers build config 00:06:18.310 common/qat: not in enabled drivers build config 00:06:18.310 common/sfc_efx: not in enabled drivers build config 00:06:18.310 mempool/bucket: not in enabled drivers build config 00:06:18.310 mempool/cnxk: not in enabled drivers build config 00:06:18.310 mempool/dpaa: not in enabled drivers build config 00:06:18.310 mempool/dpaa2: not in enabled drivers build config 00:06:18.310 mempool/octeontx: not in enabled drivers build config 00:06:18.310 mempool/stack: not in enabled drivers build config 00:06:18.310 dma/cnxk: not in enabled drivers build config 00:06:18.310 dma/dpaa: not in enabled drivers build config 00:06:18.310 dma/dpaa2: not in enabled drivers build config 00:06:18.310 dma/hisilicon: not in enabled drivers build config 00:06:18.310 dma/idxd: not in enabled drivers build config 00:06:18.310 dma/ioat: not in enabled drivers build config 00:06:18.310 dma/skeleton: not in enabled drivers build config 00:06:18.310 net/af_packet: not in enabled drivers build config 00:06:18.310 net/af_xdp: not in enabled drivers build config 00:06:18.310 net/ark: not in enabled drivers build config 00:06:18.310 net/atlantic: not in enabled drivers build config 00:06:18.310 net/avp: not in enabled drivers build config 00:06:18.310 net/axgbe: not in enabled drivers build config 00:06:18.310 net/bnx2x: not in enabled drivers build config 00:06:18.310 net/bnxt: not in enabled drivers build config 00:06:18.310 net/bonding: not in enabled drivers build config 00:06:18.310 net/cnxk: not in enabled drivers build config 00:06:18.310 net/cpfl: not in enabled drivers 
build config 00:06:18.310 net/cxgbe: not in enabled drivers build config 00:06:18.310 net/dpaa: not in enabled drivers build config 00:06:18.310 net/dpaa2: not in enabled drivers build config 00:06:18.310 net/e1000: not in enabled drivers build config 00:06:18.310 net/ena: not in enabled drivers build config 00:06:18.310 net/enetc: not in enabled drivers build config 00:06:18.310 net/enetfec: not in enabled drivers build config 00:06:18.310 net/enic: not in enabled drivers build config 00:06:18.310 net/failsafe: not in enabled drivers build config 00:06:18.310 net/fm10k: not in enabled drivers build config 00:06:18.310 net/gve: not in enabled drivers build config 00:06:18.310 net/hinic: not in enabled drivers build config 00:06:18.310 net/hns3: not in enabled drivers build config 00:06:18.310 net/i40e: not in enabled drivers build config 00:06:18.310 net/iavf: not in enabled drivers build config 00:06:18.310 net/ice: not in enabled drivers build config 00:06:18.310 net/idpf: not in enabled drivers build config 00:06:18.310 net/igc: not in enabled drivers build config 00:06:18.310 net/ionic: not in enabled drivers build config 00:06:18.310 net/ipn3ke: not in enabled drivers build config 00:06:18.310 net/ixgbe: not in enabled drivers build config 00:06:18.310 net/mana: not in enabled drivers build config 00:06:18.310 net/memif: not in enabled drivers build config 00:06:18.310 net/mlx4: not in enabled drivers build config 00:06:18.310 net/mlx5: not in enabled drivers build config 00:06:18.310 net/mvneta: not in enabled drivers build config 00:06:18.310 net/mvpp2: not in enabled drivers build config 00:06:18.310 net/netvsc: not in enabled drivers build config 00:06:18.310 net/nfb: not in enabled drivers build config 00:06:18.310 net/nfp: not in enabled drivers build config 00:06:18.310 net/ngbe: not in enabled drivers build config 00:06:18.310 net/null: not in enabled drivers build config 00:06:18.310 net/octeontx: not in enabled drivers build config 00:06:18.310 net/octeon_ep: not in enabled drivers build config 00:06:18.310 net/pcap: not in enabled drivers build config 00:06:18.310 net/pfe: not in enabled drivers build config 00:06:18.310 net/qede: not in enabled drivers build config 00:06:18.310 net/ring: not in enabled drivers build config 00:06:18.310 net/sfc: not in enabled drivers build config 00:06:18.310 net/softnic: not in enabled drivers build config 00:06:18.310 net/tap: not in enabled drivers build config 00:06:18.310 net/thunderx: not in enabled drivers build config 00:06:18.310 net/txgbe: not in enabled drivers build config 00:06:18.310 net/vdev_netvsc: not in enabled drivers build config 00:06:18.310 net/vhost: not in enabled drivers build config 00:06:18.310 net/virtio: not in enabled drivers build config 00:06:18.310 net/vmxnet3: not in enabled drivers build config 00:06:18.310 raw/*: missing internal dependency, "rawdev" 00:06:18.310 crypto/armv8: not in enabled drivers build config 00:06:18.310 crypto/bcmfs: not in enabled drivers build config 00:06:18.310 crypto/caam_jr: not in enabled drivers build config 00:06:18.311 crypto/ccp: not in enabled drivers build config 00:06:18.311 crypto/cnxk: not in enabled drivers build config 00:06:18.311 crypto/dpaa_sec: not in enabled drivers build config 00:06:18.311 crypto/dpaa2_sec: not in enabled drivers build config 00:06:18.311 crypto/ipsec_mb: not in enabled drivers build config 00:06:18.311 crypto/mlx5: not in enabled drivers build config 00:06:18.311 crypto/mvsam: not in enabled drivers build config 00:06:18.311 crypto/nitrox: 
not in enabled drivers build config 00:06:18.311 crypto/null: not in enabled drivers build config 00:06:18.311 crypto/octeontx: not in enabled drivers build config 00:06:18.311 crypto/openssl: not in enabled drivers build config 00:06:18.311 crypto/scheduler: not in enabled drivers build config 00:06:18.311 crypto/uadk: not in enabled drivers build config 00:06:18.311 crypto/virtio: not in enabled drivers build config 00:06:18.311 compress/isal: not in enabled drivers build config 00:06:18.311 compress/mlx5: not in enabled drivers build config 00:06:18.311 compress/nitrox: not in enabled drivers build config 00:06:18.311 compress/octeontx: not in enabled drivers build config 00:06:18.311 compress/zlib: not in enabled drivers build config 00:06:18.311 regex/*: missing internal dependency, "regexdev" 00:06:18.311 ml/*: missing internal dependency, "mldev" 00:06:18.311 vdpa/ifc: not in enabled drivers build config 00:06:18.311 vdpa/mlx5: not in enabled drivers build config 00:06:18.311 vdpa/nfp: not in enabled drivers build config 00:06:18.311 vdpa/sfc: not in enabled drivers build config 00:06:18.311 event/*: missing internal dependency, "eventdev" 00:06:18.311 baseband/*: missing internal dependency, "bbdev" 00:06:18.311 gpu/*: missing internal dependency, "gpudev" 00:06:18.311 00:06:18.311 00:06:18.569 Build targets in project: 84 00:06:18.569 00:06:18.569 DPDK 24.03.0 00:06:18.569 00:06:18.569 User defined options 00:06:18.569 buildtype : debug 00:06:18.569 default_library : shared 00:06:18.569 libdir : lib 00:06:18.569 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:18.569 b_sanitize : address 00:06:18.569 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:18.569 c_link_args : 00:06:18.569 cpu_instruction_set: native 00:06:18.569 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:18.569 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:18.569 enable_docs : false 00:06:18.569 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:18.569 enable_kmods : false 00:06:18.569 max_lcores : 128 00:06:18.569 tests : false 00:06:18.569 00:06:18.569 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:19.135 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:19.135 [1/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:19.135 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:19.135 [3/267] Linking static target lib/librte_kvargs.a 00:06:19.393 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:19.393 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:19.393 [6/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:19.393 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:19.393 [8/267] Linking static target lib/librte_log.a 00:06:19.393 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:19.393 [10/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:19.393 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:19.650 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:19.650 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:19.650 [14/267] Linking static target lib/librte_telemetry.a 00:06:19.650 [15/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:19.650 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:19.650 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:19.908 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:19.908 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:20.167 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:20.167 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:20.167 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:20.167 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:20.167 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.167 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:20.167 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:20.167 [27/267] Linking target lib/librte_log.so.24.1 00:06:20.425 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:20.425 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:20.425 [30/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.425 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:20.425 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:20.683 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:20.683 [34/267] Linking target lib/librte_kvargs.so.24.1 00:06:20.683 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:20.683 [36/267] Linking target lib/librte_telemetry.so.24.1 00:06:20.683 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:20.683 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:20.683 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:20.683 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:20.683 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:20.683 [42/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:20.683 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:20.940 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:20.940 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:20.940 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:20.940 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:20.940 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:20.940 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:21.199 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:21.199 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:21.199 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:21.199 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:21.199 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:21.457 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:21.457 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:21.457 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:21.457 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:21.457 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:21.457 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:21.715 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:21.715 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:21.715 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:21.715 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:21.715 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:21.715 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:21.715 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:21.973 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:21.973 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:21.973 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:22.231 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:22.231 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:22.231 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:22.231 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:22.231 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:22.231 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:22.231 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:22.231 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:22.488 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:22.488 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:22.488 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:22.488 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:22.488 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:22.747 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:22.747 [85/267] Linking static target lib/librte_ring.a 00:06:22.747 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:22.747 [87/267] Linking static target lib/librte_eal.a 00:06:22.747 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:22.747 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:23.005 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:23.005 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:23.005 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:23.005 [93/267] Linking static target lib/librte_rcu.a 00:06:23.005 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:23.005 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:23.005 [96/267] Linking static target lib/librte_mempool.a 00:06:23.005 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.263 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:23.263 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:23.263 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:23.263 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:23.263 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:23.263 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.263 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:23.520 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:23.520 [106/267] Linking static target lib/librte_mbuf.a 00:06:23.520 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:23.520 [108/267] Linking static target lib/librte_meter.a 00:06:23.520 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:23.520 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:23.778 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:23.778 [112/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:06:23.778 [113/267] Linking static target lib/librte_net.a 00:06:23.778 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:23.778 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.035 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.035 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.035 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:24.035 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:24.035 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:24.312 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.312 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:24.312 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:24.570 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:24.570 [125/267] Linking static target lib/librte_pci.a 00:06:24.570 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:24.570 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:24.570 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:24.570 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:24.571 [130/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:24.571 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:24.571 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:24.828 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:24.828 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:24.828 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:24.828 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.828 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:24.828 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:24.828 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:24.828 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:24.828 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:24.828 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:24.828 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:24.828 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:24.828 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:24.828 [146/267] Linking static target lib/librte_cmdline.a 00:06:25.086 [147/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:25.344 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:25.344 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:25.344 [150/267] Linking static target lib/librte_timer.a 00:06:25.344 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:25.344 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:25.344 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:25.602 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:25.602 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:25.602 [156/267] Linking static target lib/librte_ethdev.a 00:06:25.602 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:25.602 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:25.602 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:25.602 [160/267] Linking static target lib/librte_compressdev.a 00:06:25.893 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:25.893 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:25.893 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:25.893 [164/267] Linking static target lib/librte_hash.a 00:06:25.893 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:25.893 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:25.893 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:25.893 [168/267] Linking static target lib/librte_dmadev.a 00:06:26.152 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:26.152 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:26.152 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:26.152 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.410 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:26.410 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.410 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:26.410 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:26.410 [177/267] Linking static target lib/librte_cryptodev.a 00:06:26.410 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:26.410 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:26.668 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:26.668 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.668 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:26.668 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:26.668 [184/267] Linking static target lib/librte_power.a 00:06:26.668 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.926 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:26.926 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:26.926 [188/267] Linking static target lib/librte_reorder.a 00:06:26.926 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:26.926 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:27.184 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:27.184 [192/267] Linking static target lib/librte_security.a 00:06:27.441 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.441 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:27.699 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.699 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.699 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:27.699 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:27.699 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:27.956 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:27.956 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:27.956 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:28.213 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:28.213 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:28.213 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:28.213 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:28.213 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:28.470 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:28.470 [209/267] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:06:28.470 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:28.470 [211/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.470 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:28.470 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:28.470 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:28.470 [215/267] Linking static target drivers/librte_bus_vdev.a 00:06:28.470 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:28.470 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:28.470 [218/267] Linking static target drivers/librte_bus_pci.a 00:06:28.731 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:28.731 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:28.731 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:28.731 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:28.731 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:28.731 [224/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.731 [225/267] Linking static target drivers/librte_mempool_ring.a 00:06:28.989 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:29.247 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:30.620 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.620 [229/267] Linking target lib/librte_eal.so.24.1 00:06:30.620 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:30.877 [231/267] Linking target lib/librte_meter.so.24.1 00:06:30.877 [232/267] Linking target lib/librte_pci.so.24.1 00:06:30.877 [233/267] Linking target lib/librte_ring.so.24.1 00:06:30.877 [234/267] Linking target lib/librte_dmadev.so.24.1 00:06:30.877 [235/267] Linking target lib/librte_timer.so.24.1 00:06:30.877 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:06:30.878 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:30.878 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:30.878 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:30.878 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:30.878 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:30.878 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:06:30.878 [243/267] Linking target lib/librte_rcu.so.24.1 00:06:30.878 [244/267] Linking target lib/librte_mempool.so.24.1 00:06:31.136 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:31.136 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:31.136 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:06:31.136 [248/267] Linking target lib/librte_mbuf.so.24.1 00:06:31.136 [249/267] Generating symbol 
file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:31.136 [250/267] Linking target lib/librte_net.so.24.1 00:06:31.136 [251/267] Linking target lib/librte_compressdev.so.24.1 00:06:31.136 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:06:31.136 [253/267] Linking target lib/librte_reorder.so.24.1 00:06:31.393 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:31.393 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:31.393 [256/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.393 [257/267] Linking target lib/librte_hash.so.24.1 00:06:31.393 [258/267] Linking target lib/librte_security.so.24.1 00:06:31.393 [259/267] Linking target lib/librte_cmdline.so.24.1 00:06:31.393 [260/267] Linking target lib/librte_ethdev.so.24.1 00:06:31.393 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:31.651 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:31.651 [263/267] Linking target lib/librte_power.so.24.1 00:06:32.216 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:32.216 [265/267] Linking static target lib/librte_vhost.a 00:06:33.587 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.587 [267/267] Linking target lib/librte_vhost.so.24.1 00:06:33.587 INFO: autodetecting backend as ninja 00:06:33.587 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:48.517 CC lib/ut/ut.o 00:06:48.517 CC lib/log/log_deprecated.o 00:06:48.517 CC lib/log/log.o 00:06:48.517 CC lib/log/log_flags.o 00:06:48.517 CC lib/ut_mock/mock.o 00:06:48.517 LIB libspdk_ut_mock.a 00:06:48.517 LIB libspdk_log.a 00:06:48.517 SO libspdk_ut_mock.so.6.0 00:06:48.517 SO libspdk_log.so.7.1 00:06:48.517 LIB libspdk_ut.a 00:06:48.517 SYMLINK libspdk_ut_mock.so 00:06:48.517 SYMLINK libspdk_log.so 00:06:48.517 SO libspdk_ut.so.2.0 00:06:48.517 SYMLINK libspdk_ut.so 00:06:48.517 CC lib/dma/dma.o 00:06:48.517 CC lib/util/base64.o 00:06:48.517 CC lib/util/bit_array.o 00:06:48.517 CC lib/util/cpuset.o 00:06:48.517 CC lib/util/crc32.o 00:06:48.517 CC lib/util/crc16.o 00:06:48.517 CC lib/util/crc32c.o 00:06:48.517 CC lib/ioat/ioat.o 00:06:48.517 CXX lib/trace_parser/trace.o 00:06:48.517 CC lib/vfio_user/host/vfio_user_pci.o 00:06:48.517 CC lib/util/crc32_ieee.o 00:06:48.517 CC lib/util/crc64.o 00:06:48.517 CC lib/util/dif.o 00:06:48.517 CC lib/util/fd.o 00:06:48.517 LIB libspdk_dma.a 00:06:48.517 CC lib/util/fd_group.o 00:06:48.517 CC lib/util/file.o 00:06:48.517 SO libspdk_dma.so.5.0 00:06:48.517 CC lib/util/hexlify.o 00:06:48.517 CC lib/vfio_user/host/vfio_user.o 00:06:48.517 SYMLINK libspdk_dma.so 00:06:48.517 CC lib/util/iov.o 00:06:48.517 CC lib/util/math.o 00:06:48.517 LIB libspdk_ioat.a 00:06:48.517 SO libspdk_ioat.so.7.0 00:06:48.517 CC lib/util/net.o 00:06:48.517 CC lib/util/pipe.o 00:06:48.517 CC lib/util/strerror_tls.o 00:06:48.517 SYMLINK libspdk_ioat.so 00:06:48.517 CC lib/util/string.o 00:06:48.517 CC lib/util/uuid.o 00:06:48.517 LIB libspdk_vfio_user.a 00:06:48.517 CC lib/util/xor.o 00:06:48.517 CC lib/util/zipf.o 00:06:48.517 SO libspdk_vfio_user.so.5.0 00:06:48.517 CC lib/util/md5.o 00:06:48.517 SYMLINK libspdk_vfio_user.so 00:06:48.517 LIB libspdk_trace_parser.a 00:06:48.517 LIB libspdk_util.a 00:06:48.517 SO 
libspdk_trace_parser.so.6.0 00:06:48.517 SYMLINK libspdk_trace_parser.so 00:06:48.517 SO libspdk_util.so.10.1 00:06:48.517 SYMLINK libspdk_util.so 00:06:48.775 CC lib/conf/conf.o 00:06:48.775 CC lib/rdma_utils/rdma_utils.o 00:06:48.775 CC lib/vmd/led.o 00:06:48.775 CC lib/idxd/idxd.o 00:06:48.775 CC lib/vmd/vmd.o 00:06:48.775 CC lib/idxd/idxd_user.o 00:06:48.775 CC lib/env_dpdk/env.o 00:06:48.775 CC lib/env_dpdk/memory.o 00:06:48.775 CC lib/idxd/idxd_kernel.o 00:06:48.775 CC lib/json/json_parse.o 00:06:49.032 CC lib/json/json_util.o 00:06:49.032 CC lib/json/json_write.o 00:06:49.032 LIB libspdk_conf.a 00:06:49.032 SO libspdk_conf.so.6.0 00:06:49.032 CC lib/env_dpdk/pci.o 00:06:49.032 CC lib/env_dpdk/init.o 00:06:49.032 LIB libspdk_rdma_utils.a 00:06:49.032 SYMLINK libspdk_conf.so 00:06:49.032 CC lib/env_dpdk/threads.o 00:06:49.032 SO libspdk_rdma_utils.so.1.0 00:06:49.032 SYMLINK libspdk_rdma_utils.so 00:06:49.032 CC lib/env_dpdk/pci_ioat.o 00:06:49.032 CC lib/env_dpdk/pci_virtio.o 00:06:49.289 CC lib/env_dpdk/pci_vmd.o 00:06:49.289 LIB libspdk_json.a 00:06:49.289 CC lib/env_dpdk/pci_idxd.o 00:06:49.289 SO libspdk_json.so.6.0 00:06:49.289 CC lib/env_dpdk/pci_event.o 00:06:49.289 LIB libspdk_vmd.a 00:06:49.289 CC lib/env_dpdk/sigbus_handler.o 00:06:49.289 SYMLINK libspdk_json.so 00:06:49.289 CC lib/env_dpdk/pci_dpdk.o 00:06:49.289 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:49.289 SO libspdk_vmd.so.6.0 00:06:49.289 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:49.289 SYMLINK libspdk_vmd.so 00:06:49.289 LIB libspdk_idxd.a 00:06:49.589 SO libspdk_idxd.so.12.1 00:06:49.589 CC lib/jsonrpc/jsonrpc_server.o 00:06:49.589 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:49.589 CC lib/jsonrpc/jsonrpc_client.o 00:06:49.589 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:49.589 CC lib/rdma_provider/common.o 00:06:49.589 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:49.589 SYMLINK libspdk_idxd.so 00:06:49.849 LIB libspdk_rdma_provider.a 00:06:49.849 SO libspdk_rdma_provider.so.7.0 00:06:49.849 LIB libspdk_jsonrpc.a 00:06:49.849 SO libspdk_jsonrpc.so.6.0 00:06:49.849 SYMLINK libspdk_rdma_provider.so 00:06:49.849 SYMLINK libspdk_jsonrpc.so 00:06:50.110 CC lib/rpc/rpc.o 00:06:50.368 LIB libspdk_rpc.a 00:06:50.368 LIB libspdk_env_dpdk.a 00:06:50.368 SO libspdk_rpc.so.6.0 00:06:50.368 SO libspdk_env_dpdk.so.15.1 00:06:50.368 SYMLINK libspdk_rpc.so 00:06:50.368 SYMLINK libspdk_env_dpdk.so 00:06:50.628 CC lib/notify/notify_rpc.o 00:06:50.628 CC lib/notify/notify.o 00:06:50.628 CC lib/trace/trace.o 00:06:50.628 CC lib/trace/trace_flags.o 00:06:50.628 CC lib/trace/trace_rpc.o 00:06:50.628 CC lib/keyring/keyring.o 00:06:50.628 CC lib/keyring/keyring_rpc.o 00:06:50.628 LIB libspdk_notify.a 00:06:50.628 SO libspdk_notify.so.6.0 00:06:50.891 SYMLINK libspdk_notify.so 00:06:50.891 LIB libspdk_keyring.a 00:06:50.891 LIB libspdk_trace.a 00:06:50.891 SO libspdk_keyring.so.2.0 00:06:50.891 SO libspdk_trace.so.11.0 00:06:50.891 SYMLINK libspdk_keyring.so 00:06:50.891 SYMLINK libspdk_trace.so 00:06:51.150 CC lib/sock/sock.o 00:06:51.150 CC lib/sock/sock_rpc.o 00:06:51.150 CC lib/thread/thread.o 00:06:51.150 CC lib/thread/iobuf.o 00:06:51.410 LIB libspdk_sock.a 00:06:51.670 SO libspdk_sock.so.10.0 00:06:51.670 SYMLINK libspdk_sock.so 00:06:51.928 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:51.928 CC lib/nvme/nvme_ctrlr.o 00:06:51.928 CC lib/nvme/nvme_ns_cmd.o 00:06:51.928 CC lib/nvme/nvme_fabric.o 00:06:51.928 CC lib/nvme/nvme_ns.o 00:06:51.928 CC lib/nvme/nvme_pcie_common.o 00:06:51.928 CC lib/nvme/nvme_qpair.o 00:06:51.928 CC lib/nvme/nvme_pcie.o 
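The lib/nvme objects compiled in this stretch of the log make up libspdk_nvme, the userspace NVMe driver that this nvme-vg autotest job ultimately exercises. As a rough orientation only, a minimal consumer of that library looks like the sketch below, modeled on SPDK's examples/nvme/hello_world; the program name "probe_sketch" and the single-controller cleanup are illustrative assumptions, not something taken from this log.

/* Hedged sketch of NVMe controller enumeration with libspdk_nvme. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        return true;    /* attach to every controller the probe finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
        printf("attached: %s\n", trid->traddr);
        g_ctrlr = ctrlr;        /* a real tool would track all controllers */
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "probe_sketch";
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        /* A NULL transport ID scans the local PCIe bus for NVMe devices. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                return 1;
        }
        if (g_ctrlr != NULL) {
                spdk_nvme_detach(g_ctrlr);
        }
        return 0;
}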
00:06:51.928 CC lib/nvme/nvme.o 00:06:52.493 CC lib/nvme/nvme_quirks.o 00:06:52.493 CC lib/nvme/nvme_transport.o 00:06:52.493 CC lib/nvme/nvme_discovery.o 00:06:52.493 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:52.751 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:52.751 CC lib/nvme/nvme_tcp.o 00:06:52.751 CC lib/nvme/nvme_opal.o 00:06:52.751 CC lib/nvme/nvme_io_msg.o 00:06:52.751 LIB libspdk_thread.a 00:06:52.751 SO libspdk_thread.so.11.0 00:06:52.751 SYMLINK libspdk_thread.so 00:06:53.009 CC lib/nvme/nvme_poll_group.o 00:06:53.009 CC lib/accel/accel.o 00:06:53.009 CC lib/nvme/nvme_zns.o 00:06:53.009 CC lib/nvme/nvme_stubs.o 00:06:53.009 CC lib/nvme/nvme_auth.o 00:06:53.009 CC lib/nvme/nvme_cuse.o 00:06:53.267 CC lib/blob/blobstore.o 00:06:53.267 CC lib/init/json_config.o 00:06:53.525 CC lib/blob/request.o 00:06:53.525 CC lib/blob/zeroes.o 00:06:53.525 CC lib/init/subsystem.o 00:06:53.525 CC lib/blob/blob_bs_dev.o 00:06:53.782 CC lib/nvme/nvme_rdma.o 00:06:53.782 CC lib/init/subsystem_rpc.o 00:06:53.782 CC lib/init/rpc.o 00:06:53.782 CC lib/accel/accel_rpc.o 00:06:53.782 LIB libspdk_init.a 00:06:54.040 CC lib/accel/accel_sw.o 00:06:54.040 SO libspdk_init.so.6.0 00:06:54.040 CC lib/virtio/virtio.o 00:06:54.040 CC lib/fsdev/fsdev.o 00:06:54.040 SYMLINK libspdk_init.so 00:06:54.040 CC lib/fsdev/fsdev_io.o 00:06:54.040 CC lib/fsdev/fsdev_rpc.o 00:06:54.040 CC lib/virtio/virtio_vhost_user.o 00:06:54.040 CC lib/virtio/virtio_vfio_user.o 00:06:54.298 CC lib/virtio/virtio_pci.o 00:06:54.298 LIB libspdk_accel.a 00:06:54.298 SO libspdk_accel.so.16.0 00:06:54.298 SYMLINK libspdk_accel.so 00:06:54.298 CC lib/event/app.o 00:06:54.298 CC lib/event/reactor.o 00:06:54.298 CC lib/event/log_rpc.o 00:06:54.298 CC lib/event/app_rpc.o 00:06:54.298 CC lib/event/scheduler_static.o 00:06:54.554 CC lib/bdev/bdev_rpc.o 00:06:54.555 CC lib/bdev/bdev.o 00:06:54.555 LIB libspdk_virtio.a 00:06:54.555 CC lib/bdev/bdev_zone.o 00:06:54.555 CC lib/bdev/part.o 00:06:54.555 SO libspdk_virtio.so.7.0 00:06:54.555 LIB libspdk_fsdev.a 00:06:54.555 SO libspdk_fsdev.so.2.0 00:06:54.555 SYMLINK libspdk_virtio.so 00:06:54.813 CC lib/bdev/scsi_nvme.o 00:06:54.813 SYMLINK libspdk_fsdev.so 00:06:54.813 LIB libspdk_event.a 00:06:54.813 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:54.813 SO libspdk_event.so.14.0 00:06:54.813 SYMLINK libspdk_event.so 00:06:55.072 LIB libspdk_nvme.a 00:06:55.333 SO libspdk_nvme.so.15.0 00:06:55.597 SYMLINK libspdk_nvme.so 00:06:55.597 LIB libspdk_fuse_dispatcher.a 00:06:55.597 SO libspdk_fuse_dispatcher.so.1.0 00:06:55.597 SYMLINK libspdk_fuse_dispatcher.so 00:06:56.533 LIB libspdk_blob.a 00:06:56.533 SO libspdk_blob.so.11.0 00:06:56.789 SYMLINK libspdk_blob.so 00:06:57.046 CC lib/blobfs/blobfs.o 00:06:57.046 CC lib/blobfs/tree.o 00:06:57.046 CC lib/lvol/lvol.o 00:06:57.611 LIB libspdk_bdev.a 00:06:57.611 SO libspdk_bdev.so.17.0 00:06:57.611 SYMLINK libspdk_bdev.so 00:06:57.869 LIB libspdk_blobfs.a 00:06:57.869 SO libspdk_blobfs.so.10.0 00:06:57.869 CC lib/nbd/nbd.o 00:06:57.869 CC lib/nbd/nbd_rpc.o 00:06:57.870 CC lib/ftl/ftl_init.o 00:06:57.870 CC lib/scsi/dev.o 00:06:57.870 CC lib/ftl/ftl_core.o 00:06:57.870 CC lib/scsi/lun.o 00:06:57.870 CC lib/ublk/ublk.o 00:06:57.870 SYMLINK libspdk_blobfs.so 00:06:57.870 CC lib/nvmf/ctrlr.o 00:06:57.870 CC lib/scsi/port.o 00:06:57.870 LIB libspdk_lvol.a 00:06:57.870 SO libspdk_lvol.so.10.0 00:06:57.870 SYMLINK libspdk_lvol.so 00:06:57.870 CC lib/scsi/scsi.o 00:06:57.870 CC lib/ftl/ftl_layout.o 00:06:57.870 CC lib/nvmf/ctrlr_discovery.o 00:06:58.128 CC lib/scsi/scsi_bdev.o 
00:06:58.128 CC lib/scsi/scsi_pr.o 00:06:58.128 CC lib/scsi/scsi_rpc.o 00:06:58.128 CC lib/scsi/task.o 00:06:58.128 CC lib/nvmf/ctrlr_bdev.o 00:06:58.128 LIB libspdk_nbd.a 00:06:58.128 CC lib/ftl/ftl_debug.o 00:06:58.384 SO libspdk_nbd.so.7.0 00:06:58.384 CC lib/ublk/ublk_rpc.o 00:06:58.384 SYMLINK libspdk_nbd.so 00:06:58.384 CC lib/nvmf/subsystem.o 00:06:58.384 CC lib/nvmf/nvmf.o 00:06:58.384 CC lib/nvmf/nvmf_rpc.o 00:06:58.384 LIB libspdk_scsi.a 00:06:58.384 SO libspdk_scsi.so.9.0 00:06:58.384 CC lib/ftl/ftl_io.o 00:06:58.384 CC lib/ftl/ftl_sb.o 00:06:58.384 CC lib/ftl/ftl_l2p.o 00:06:58.384 LIB libspdk_ublk.a 00:06:58.639 SYMLINK libspdk_scsi.so 00:06:58.639 CC lib/ftl/ftl_l2p_flat.o 00:06:58.639 SO libspdk_ublk.so.3.0 00:06:58.639 SYMLINK libspdk_ublk.so 00:06:58.639 CC lib/ftl/ftl_nv_cache.o 00:06:58.639 CC lib/ftl/ftl_band.o 00:06:58.639 CC lib/ftl/ftl_band_ops.o 00:06:58.639 CC lib/iscsi/conn.o 00:06:58.639 CC lib/vhost/vhost.o 00:06:58.896 CC lib/vhost/vhost_rpc.o 00:06:58.897 CC lib/vhost/vhost_scsi.o 00:06:59.154 CC lib/vhost/vhost_blk.o 00:06:59.154 CC lib/vhost/rte_vhost_user.o 00:06:59.154 CC lib/nvmf/transport.o 00:06:59.411 CC lib/iscsi/init_grp.o 00:06:59.411 CC lib/iscsi/iscsi.o 00:06:59.686 CC lib/nvmf/tcp.o 00:06:59.686 CC lib/nvmf/stubs.o 00:06:59.686 CC lib/nvmf/mdns_server.o 00:06:59.686 CC lib/nvmf/rdma.o 00:06:59.686 CC lib/ftl/ftl_writer.o 00:06:59.686 CC lib/ftl/ftl_rq.o 00:06:59.976 CC lib/iscsi/param.o 00:06:59.976 CC lib/ftl/ftl_reloc.o 00:06:59.976 CC lib/ftl/ftl_l2p_cache.o 00:06:59.976 CC lib/ftl/ftl_p2l.o 00:06:59.976 CC lib/ftl/ftl_p2l_log.o 00:06:59.976 CC lib/ftl/mngt/ftl_mngt.o 00:07:00.234 LIB libspdk_vhost.a 00:07:00.234 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:00.234 SO libspdk_vhost.so.8.0 00:07:00.234 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:00.234 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:00.234 CC lib/nvmf/auth.o 00:07:00.234 CC lib/iscsi/portal_grp.o 00:07:00.234 SYMLINK libspdk_vhost.so 00:07:00.234 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:00.491 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:00.491 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:00.491 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:00.491 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:00.491 CC lib/iscsi/tgt_node.o 00:07:00.749 CC lib/iscsi/iscsi_subsystem.o 00:07:00.749 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:00.749 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:00.749 CC lib/iscsi/iscsi_rpc.o 00:07:00.749 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:00.749 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:00.749 CC lib/iscsi/task.o 00:07:01.007 CC lib/ftl/utils/ftl_conf.o 00:07:01.007 CC lib/ftl/utils/ftl_md.o 00:07:01.007 CC lib/ftl/utils/ftl_mempool.o 00:07:01.007 CC lib/ftl/utils/ftl_bitmap.o 00:07:01.007 CC lib/ftl/utils/ftl_property.o 00:07:01.007 LIB libspdk_iscsi.a 00:07:01.264 SO libspdk_iscsi.so.8.0 00:07:01.265 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:01.265 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:01.265 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:01.265 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:01.265 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:01.265 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:01.265 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:01.265 SYMLINK libspdk_iscsi.so 00:07:01.265 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:01.265 LIB libspdk_nvmf.a 00:07:01.265 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:01.265 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:01.524 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:01.524 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:01.524 CC lib/ftl/base/ftl_base_dev.o 00:07:01.524 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:01.524 CC lib/ftl/base/ftl_base_bdev.o 00:07:01.524 SO libspdk_nvmf.so.20.0 00:07:01.524 CC lib/ftl/ftl_trace.o 00:07:01.783 LIB libspdk_ftl.a 00:07:01.783 SYMLINK libspdk_nvmf.so 00:07:01.783 SO libspdk_ftl.so.9.0 00:07:02.040 SYMLINK libspdk_ftl.so 00:07:02.296 CC module/env_dpdk/env_dpdk_rpc.o 00:07:02.296 CC module/fsdev/aio/fsdev_aio.o 00:07:02.296 CC module/accel/ioat/accel_ioat.o 00:07:02.296 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:02.296 CC module/sock/posix/posix.o 00:07:02.296 CC module/accel/iaa/accel_iaa.o 00:07:02.553 CC module/accel/error/accel_error.o 00:07:02.553 CC module/accel/dsa/accel_dsa.o 00:07:02.553 CC module/blob/bdev/blob_bdev.o 00:07:02.553 CC module/keyring/file/keyring.o 00:07:02.553 LIB libspdk_env_dpdk_rpc.a 00:07:02.553 SO libspdk_env_dpdk_rpc.so.6.0 00:07:02.553 SYMLINK libspdk_env_dpdk_rpc.so 00:07:02.553 CC module/keyring/file/keyring_rpc.o 00:07:02.553 CC module/accel/error/accel_error_rpc.o 00:07:02.553 CC module/accel/ioat/accel_ioat_rpc.o 00:07:02.553 CC module/accel/iaa/accel_iaa_rpc.o 00:07:02.553 LIB libspdk_scheduler_dynamic.a 00:07:02.553 SO libspdk_scheduler_dynamic.so.4.0 00:07:02.553 LIB libspdk_keyring_file.a 00:07:02.553 SO libspdk_keyring_file.so.2.0 00:07:02.811 LIB libspdk_blob_bdev.a 00:07:02.811 LIB libspdk_accel_error.a 00:07:02.811 SYMLINK libspdk_scheduler_dynamic.so 00:07:02.811 SYMLINK libspdk_keyring_file.so 00:07:02.811 LIB libspdk_accel_ioat.a 00:07:02.811 SO libspdk_blob_bdev.so.11.0 00:07:02.811 LIB libspdk_accel_iaa.a 00:07:02.811 SO libspdk_accel_error.so.2.0 00:07:02.811 CC module/accel/dsa/accel_dsa_rpc.o 00:07:02.811 SO libspdk_accel_ioat.so.6.0 00:07:02.811 SO libspdk_accel_iaa.so.3.0 00:07:02.811 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:02.811 SYMLINK libspdk_blob_bdev.so 00:07:02.811 SYMLINK libspdk_accel_ioat.so 00:07:02.811 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:02.811 SYMLINK libspdk_accel_error.so 00:07:02.811 CC module/fsdev/aio/linux_aio_mgr.o 00:07:02.811 SYMLINK libspdk_accel_iaa.so 00:07:02.811 CC module/keyring/linux/keyring.o 00:07:02.812 CC module/scheduler/gscheduler/gscheduler.o 00:07:02.812 LIB libspdk_accel_dsa.a 00:07:02.812 SO libspdk_accel_dsa.so.5.0 00:07:02.812 LIB libspdk_scheduler_dpdk_governor.a 00:07:02.812 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:03.070 SYMLINK libspdk_accel_dsa.so 00:07:03.070 CC module/keyring/linux/keyring_rpc.o 00:07:03.070 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:03.070 LIB libspdk_scheduler_gscheduler.a 00:07:03.070 SO libspdk_scheduler_gscheduler.so.4.0 00:07:03.070 CC module/blobfs/bdev/blobfs_bdev.o 00:07:03.070 CC module/bdev/delay/vbdev_delay.o 00:07:03.070 LIB libspdk_keyring_linux.a 00:07:03.070 SYMLINK libspdk_scheduler_gscheduler.so 00:07:03.070 SO libspdk_keyring_linux.so.1.0 00:07:03.070 LIB libspdk_fsdev_aio.a 00:07:03.070 CC module/bdev/lvol/vbdev_lvol.o 00:07:03.070 CC module/bdev/gpt/gpt.o 00:07:03.070 CC module/bdev/malloc/bdev_malloc.o 00:07:03.070 CC module/bdev/error/vbdev_error.o 00:07:03.070 SYMLINK libspdk_keyring_linux.so 00:07:03.070 CC module/bdev/gpt/vbdev_gpt.o 00:07:03.070 SO libspdk_fsdev_aio.so.1.0 00:07:03.070 LIB libspdk_sock_posix.a 00:07:03.328 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:03.328 SO libspdk_sock_posix.so.6.0 00:07:03.328 SYMLINK libspdk_fsdev_aio.so 00:07:03.328 CC module/bdev/null/bdev_null.o 00:07:03.328 CC module/bdev/null/bdev_null_rpc.o 00:07:03.328 SYMLINK libspdk_sock_posix.so 00:07:03.328 CC module/bdev/lvol/vbdev_lvol_rpc.o 
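The CC module/... lines here are SPDK's pluggable components (accel, sock, keyring, scheduler, and the bdev backends such as malloc, null, nvme, and raid) that get linked into applications next to the core libraries; the event/subsystems objects compiled a little further down wire them into the app framework. A hedged sketch of how application code reaches one of these backends through the generic bdev API is shown below; the bdev name "Malloc0" and the app name are assumptions for illustration (a JSON config handled by the module/bdev/malloc code would normally create such a device).

/* Hedged sketch: open a bdev by name from inside the SPDK app framework. */
#include <stdio.h>
#include "spdk/bdev.h"
#include "spdk/event.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
        /* resize/remove notifications; ignored in this sketch */
}

static void
start_fn(void *ctx)
{
        struct spdk_bdev_desc *desc = NULL;
        int rc;

        /* "Malloc0" is an assumed name; any configured bdev would do. */
        rc = spdk_bdev_open_ext("Malloc0", true, bdev_event_cb, NULL, &desc);
        if (rc == 0) {
                struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
                printf("opened %s, block size %u\n",
                       spdk_bdev_get_name(bdev),
                       spdk_bdev_get_block_size(bdev));
                spdk_bdev_close(desc);
        }
        spdk_app_stop(rc);
}

int
main(int argc, char **argv)
{
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "bdev_sketch";
        rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
}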
00:07:03.328 CC module/bdev/error/vbdev_error_rpc.o 00:07:03.328 LIB libspdk_blobfs_bdev.a 00:07:03.328 LIB libspdk_bdev_gpt.a 00:07:03.328 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:03.328 SO libspdk_blobfs_bdev.so.6.0 00:07:03.328 SO libspdk_bdev_gpt.so.6.0 00:07:03.328 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:03.586 SYMLINK libspdk_bdev_gpt.so 00:07:03.586 SYMLINK libspdk_blobfs_bdev.so 00:07:03.586 LIB libspdk_bdev_error.a 00:07:03.586 SO libspdk_bdev_error.so.6.0 00:07:03.586 LIB libspdk_bdev_null.a 00:07:03.586 SO libspdk_bdev_null.so.6.0 00:07:03.586 LIB libspdk_bdev_delay.a 00:07:03.586 LIB libspdk_bdev_malloc.a 00:07:03.586 SYMLINK libspdk_bdev_error.so 00:07:03.586 SO libspdk_bdev_delay.so.6.0 00:07:03.586 CC module/bdev/nvme/bdev_nvme.o 00:07:03.586 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:03.586 SO libspdk_bdev_malloc.so.6.0 00:07:03.586 CC module/bdev/passthru/vbdev_passthru.o 00:07:03.586 CC module/bdev/raid/bdev_raid.o 00:07:03.586 SYMLINK libspdk_bdev_null.so 00:07:03.586 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:03.586 SYMLINK libspdk_bdev_delay.so 00:07:03.586 SYMLINK libspdk_bdev_malloc.so 00:07:03.586 CC module/bdev/split/vbdev_split.o 00:07:03.586 LIB libspdk_bdev_lvol.a 00:07:03.843 SO libspdk_bdev_lvol.so.6.0 00:07:03.843 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:03.843 CC module/bdev/xnvme/bdev_xnvme.o 00:07:03.843 CC module/bdev/aio/bdev_aio.o 00:07:03.843 SYMLINK libspdk_bdev_lvol.so 00:07:03.843 CC module/bdev/aio/bdev_aio_rpc.o 00:07:03.843 CC module/bdev/ftl/bdev_ftl.o 00:07:03.843 CC module/bdev/split/vbdev_split_rpc.o 00:07:03.843 LIB libspdk_bdev_passthru.a 00:07:03.843 SO libspdk_bdev_passthru.so.6.0 00:07:03.843 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:04.102 SYMLINK libspdk_bdev_passthru.so 00:07:04.102 LIB libspdk_bdev_split.a 00:07:04.102 LIB libspdk_bdev_xnvme.a 00:07:04.102 SO libspdk_bdev_split.so.6.0 00:07:04.102 CC module/bdev/iscsi/bdev_iscsi.o 00:07:04.102 SO libspdk_bdev_xnvme.so.3.0 00:07:04.102 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:04.102 SYMLINK libspdk_bdev_split.so 00:07:04.102 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:04.102 LIB libspdk_bdev_aio.a 00:07:04.102 SYMLINK libspdk_bdev_xnvme.so 00:07:04.102 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:04.102 CC module/bdev/nvme/nvme_rpc.o 00:07:04.102 SO libspdk_bdev_aio.so.6.0 00:07:04.102 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:04.102 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:04.360 SYMLINK libspdk_bdev_aio.so 00:07:04.360 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:04.360 LIB libspdk_bdev_zone_block.a 00:07:04.360 CC module/bdev/nvme/bdev_mdns_client.o 00:07:04.360 SO libspdk_bdev_zone_block.so.6.0 00:07:04.360 SYMLINK libspdk_bdev_zone_block.so 00:07:04.360 CC module/bdev/raid/bdev_raid_rpc.o 00:07:04.360 LIB libspdk_bdev_ftl.a 00:07:04.360 SO libspdk_bdev_ftl.so.6.0 00:07:04.360 CC module/bdev/nvme/vbdev_opal.o 00:07:04.360 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:04.360 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:04.360 LIB libspdk_bdev_iscsi.a 00:07:04.360 SYMLINK libspdk_bdev_ftl.so 00:07:04.360 CC module/bdev/raid/bdev_raid_sb.o 00:07:04.360 SO libspdk_bdev_iscsi.so.6.0 00:07:04.617 CC module/bdev/raid/raid0.o 00:07:04.617 SYMLINK libspdk_bdev_iscsi.so 00:07:04.617 CC module/bdev/raid/raid1.o 00:07:04.617 CC module/bdev/raid/concat.o 00:07:04.617 LIB libspdk_bdev_virtio.a 00:07:04.875 SO libspdk_bdev_virtio.so.6.0 00:07:04.875 SYMLINK libspdk_bdev_virtio.so 00:07:04.875 LIB libspdk_bdev_raid.a 00:07:04.875 SO 
libspdk_bdev_raid.so.6.0 00:07:05.135 SYMLINK libspdk_bdev_raid.so 00:07:06.513 LIB libspdk_bdev_nvme.a 00:07:06.513 SO libspdk_bdev_nvme.so.7.1 00:07:06.513 SYMLINK libspdk_bdev_nvme.so 00:07:06.772 CC module/event/subsystems/iobuf/iobuf.o 00:07:06.772 CC module/event/subsystems/sock/sock.o 00:07:06.772 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:06.772 CC module/event/subsystems/vmd/vmd.o 00:07:06.772 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:06.772 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:06.772 CC module/event/subsystems/keyring/keyring.o 00:07:06.772 CC module/event/subsystems/fsdev/fsdev.o 00:07:06.772 CC module/event/subsystems/scheduler/scheduler.o 00:07:07.030 LIB libspdk_event_keyring.a 00:07:07.030 LIB libspdk_event_scheduler.a 00:07:07.030 SO libspdk_event_keyring.so.1.0 00:07:07.030 LIB libspdk_event_vhost_blk.a 00:07:07.030 LIB libspdk_event_fsdev.a 00:07:07.030 LIB libspdk_event_sock.a 00:07:07.030 SO libspdk_event_scheduler.so.4.0 00:07:07.030 LIB libspdk_event_iobuf.a 00:07:07.030 LIB libspdk_event_vmd.a 00:07:07.030 SO libspdk_event_vhost_blk.so.3.0 00:07:07.030 SO libspdk_event_sock.so.5.0 00:07:07.030 SO libspdk_event_fsdev.so.1.0 00:07:07.030 SO libspdk_event_iobuf.so.3.0 00:07:07.030 SO libspdk_event_vmd.so.6.0 00:07:07.030 SYMLINK libspdk_event_keyring.so 00:07:07.030 SYMLINK libspdk_event_scheduler.so 00:07:07.030 SYMLINK libspdk_event_fsdev.so 00:07:07.030 SYMLINK libspdk_event_sock.so 00:07:07.030 SYMLINK libspdk_event_vhost_blk.so 00:07:07.030 SYMLINK libspdk_event_vmd.so 00:07:07.030 SYMLINK libspdk_event_iobuf.so 00:07:07.287 CC module/event/subsystems/accel/accel.o 00:07:07.599 LIB libspdk_event_accel.a 00:07:07.599 SO libspdk_event_accel.so.6.0 00:07:07.599 SYMLINK libspdk_event_accel.so 00:07:07.861 CC module/event/subsystems/bdev/bdev.o 00:07:07.861 LIB libspdk_event_bdev.a 00:07:07.861 SO libspdk_event_bdev.so.6.0 00:07:08.119 SYMLINK libspdk_event_bdev.so 00:07:08.119 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:08.119 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:08.119 CC module/event/subsystems/scsi/scsi.o 00:07:08.119 CC module/event/subsystems/nbd/nbd.o 00:07:08.119 CC module/event/subsystems/ublk/ublk.o 00:07:08.377 LIB libspdk_event_ublk.a 00:07:08.377 LIB libspdk_event_nbd.a 00:07:08.377 LIB libspdk_event_scsi.a 00:07:08.377 SO libspdk_event_nbd.so.6.0 00:07:08.377 SO libspdk_event_ublk.so.3.0 00:07:08.377 SO libspdk_event_scsi.so.6.0 00:07:08.377 SYMLINK libspdk_event_ublk.so 00:07:08.377 SYMLINK libspdk_event_nbd.so 00:07:08.377 SYMLINK libspdk_event_scsi.so 00:07:08.377 LIB libspdk_event_nvmf.a 00:07:08.377 SO libspdk_event_nvmf.so.6.0 00:07:08.377 SYMLINK libspdk_event_nvmf.so 00:07:08.634 CC module/event/subsystems/iscsi/iscsi.o 00:07:08.634 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:08.634 LIB libspdk_event_vhost_scsi.a 00:07:08.634 LIB libspdk_event_iscsi.a 00:07:08.634 SO libspdk_event_vhost_scsi.so.3.0 00:07:08.634 SO libspdk_event_iscsi.so.6.0 00:07:08.895 SYMLINK libspdk_event_vhost_scsi.so 00:07:08.895 SYMLINK libspdk_event_iscsi.so 00:07:08.895 SO libspdk.so.6.0 00:07:08.895 SYMLINK libspdk.so 00:07:09.155 CC test/rpc_client/rpc_client_test.o 00:07:09.155 CXX app/trace/trace.o 00:07:09.155 TEST_HEADER include/spdk/accel.h 00:07:09.155 TEST_HEADER include/spdk/accel_module.h 00:07:09.155 TEST_HEADER include/spdk/assert.h 00:07:09.155 TEST_HEADER include/spdk/barrier.h 00:07:09.155 TEST_HEADER include/spdk/base64.h 00:07:09.155 TEST_HEADER include/spdk/bdev.h 00:07:09.155 TEST_HEADER 
include/spdk/bdev_module.h 00:07:09.155 TEST_HEADER include/spdk/bdev_zone.h 00:07:09.155 TEST_HEADER include/spdk/bit_array.h 00:07:09.155 TEST_HEADER include/spdk/bit_pool.h 00:07:09.155 TEST_HEADER include/spdk/blob_bdev.h 00:07:09.155 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:09.155 TEST_HEADER include/spdk/blobfs.h 00:07:09.155 TEST_HEADER include/spdk/blob.h 00:07:09.155 TEST_HEADER include/spdk/conf.h 00:07:09.155 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:09.155 TEST_HEADER include/spdk/config.h 00:07:09.155 TEST_HEADER include/spdk/cpuset.h 00:07:09.155 TEST_HEADER include/spdk/crc16.h 00:07:09.155 TEST_HEADER include/spdk/crc32.h 00:07:09.155 TEST_HEADER include/spdk/crc64.h 00:07:09.155 TEST_HEADER include/spdk/dif.h 00:07:09.155 TEST_HEADER include/spdk/dma.h 00:07:09.155 TEST_HEADER include/spdk/endian.h 00:07:09.155 TEST_HEADER include/spdk/env_dpdk.h 00:07:09.155 TEST_HEADER include/spdk/env.h 00:07:09.155 TEST_HEADER include/spdk/event.h 00:07:09.155 TEST_HEADER include/spdk/fd_group.h 00:07:09.155 TEST_HEADER include/spdk/fd.h 00:07:09.155 TEST_HEADER include/spdk/file.h 00:07:09.155 TEST_HEADER include/spdk/fsdev.h 00:07:09.155 TEST_HEADER include/spdk/fsdev_module.h 00:07:09.155 TEST_HEADER include/spdk/ftl.h 00:07:09.155 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:09.155 CC examples/util/zipf/zipf.o 00:07:09.155 TEST_HEADER include/spdk/gpt_spec.h 00:07:09.155 TEST_HEADER include/spdk/hexlify.h 00:07:09.155 TEST_HEADER include/spdk/histogram_data.h 00:07:09.155 CC examples/ioat/perf/perf.o 00:07:09.155 CC test/thread/poller_perf/poller_perf.o 00:07:09.155 TEST_HEADER include/spdk/idxd.h 00:07:09.155 TEST_HEADER include/spdk/idxd_spec.h 00:07:09.155 TEST_HEADER include/spdk/init.h 00:07:09.155 TEST_HEADER include/spdk/ioat.h 00:07:09.155 TEST_HEADER include/spdk/ioat_spec.h 00:07:09.155 TEST_HEADER include/spdk/iscsi_spec.h 00:07:09.155 TEST_HEADER include/spdk/json.h 00:07:09.155 TEST_HEADER include/spdk/jsonrpc.h 00:07:09.155 TEST_HEADER include/spdk/keyring.h 00:07:09.155 TEST_HEADER include/spdk/keyring_module.h 00:07:09.155 TEST_HEADER include/spdk/likely.h 00:07:09.155 TEST_HEADER include/spdk/log.h 00:07:09.155 TEST_HEADER include/spdk/lvol.h 00:07:09.155 TEST_HEADER include/spdk/md5.h 00:07:09.155 TEST_HEADER include/spdk/memory.h 00:07:09.155 TEST_HEADER include/spdk/mmio.h 00:07:09.155 TEST_HEADER include/spdk/nbd.h 00:07:09.155 TEST_HEADER include/spdk/net.h 00:07:09.155 TEST_HEADER include/spdk/notify.h 00:07:09.155 TEST_HEADER include/spdk/nvme.h 00:07:09.155 TEST_HEADER include/spdk/nvme_intel.h 00:07:09.155 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:09.155 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:09.155 TEST_HEADER include/spdk/nvme_spec.h 00:07:09.155 CC test/app/bdev_svc/bdev_svc.o 00:07:09.412 TEST_HEADER include/spdk/nvme_zns.h 00:07:09.412 CC test/dma/test_dma/test_dma.o 00:07:09.412 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:09.412 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:09.412 TEST_HEADER include/spdk/nvmf.h 00:07:09.412 TEST_HEADER include/spdk/nvmf_spec.h 00:07:09.412 TEST_HEADER include/spdk/nvmf_transport.h 00:07:09.412 TEST_HEADER include/spdk/opal.h 00:07:09.412 TEST_HEADER include/spdk/opal_spec.h 00:07:09.412 TEST_HEADER include/spdk/pci_ids.h 00:07:09.412 TEST_HEADER include/spdk/pipe.h 00:07:09.412 TEST_HEADER include/spdk/queue.h 00:07:09.412 TEST_HEADER include/spdk/reduce.h 00:07:09.412 TEST_HEADER include/spdk/rpc.h 00:07:09.412 TEST_HEADER include/spdk/scheduler.h 00:07:09.412 TEST_HEADER include/spdk/scsi.h 
00:07:09.412 TEST_HEADER include/spdk/scsi_spec.h 00:07:09.412 LINK rpc_client_test 00:07:09.412 TEST_HEADER include/spdk/sock.h 00:07:09.412 TEST_HEADER include/spdk/stdinc.h 00:07:09.412 TEST_HEADER include/spdk/string.h 00:07:09.412 TEST_HEADER include/spdk/thread.h 00:07:09.412 CC test/env/mem_callbacks/mem_callbacks.o 00:07:09.412 TEST_HEADER include/spdk/trace.h 00:07:09.412 TEST_HEADER include/spdk/trace_parser.h 00:07:09.412 TEST_HEADER include/spdk/tree.h 00:07:09.412 TEST_HEADER include/spdk/ublk.h 00:07:09.412 TEST_HEADER include/spdk/util.h 00:07:09.412 TEST_HEADER include/spdk/uuid.h 00:07:09.412 TEST_HEADER include/spdk/version.h 00:07:09.412 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:09.412 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:09.412 TEST_HEADER include/spdk/vhost.h 00:07:09.412 TEST_HEADER include/spdk/vmd.h 00:07:09.412 TEST_HEADER include/spdk/xor.h 00:07:09.412 TEST_HEADER include/spdk/zipf.h 00:07:09.412 CXX test/cpp_headers/accel.o 00:07:09.412 LINK interrupt_tgt 00:07:09.412 LINK zipf 00:07:09.412 LINK poller_perf 00:07:09.412 LINK ioat_perf 00:07:09.412 LINK bdev_svc 00:07:09.412 CXX test/cpp_headers/accel_module.o 00:07:09.412 LINK spdk_trace 00:07:09.412 CXX test/cpp_headers/assert.o 00:07:09.412 CXX test/cpp_headers/barrier.o 00:07:09.671 CC test/env/vtophys/vtophys.o 00:07:09.671 CC examples/ioat/verify/verify.o 00:07:09.671 CC app/trace_record/trace_record.o 00:07:09.671 CXX test/cpp_headers/base64.o 00:07:09.671 CXX test/cpp_headers/bdev.o 00:07:09.671 LINK vtophys 00:07:09.930 CC app/nvmf_tgt/nvmf_main.o 00:07:09.930 LINK verify 00:07:09.930 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:09.930 CC app/iscsi_tgt/iscsi_tgt.o 00:07:09.930 LINK test_dma 00:07:09.930 LINK mem_callbacks 00:07:09.930 CXX test/cpp_headers/bdev_module.o 00:07:09.930 LINK spdk_trace_record 00:07:09.930 CC app/spdk_lspci/spdk_lspci.o 00:07:09.930 CC app/spdk_tgt/spdk_tgt.o 00:07:09.930 LINK nvmf_tgt 00:07:09.930 CXX test/cpp_headers/bdev_zone.o 00:07:09.930 LINK iscsi_tgt 00:07:10.189 LINK spdk_lspci 00:07:10.189 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:10.189 CC app/spdk_nvme_perf/perf.o 00:07:10.189 CC test/env/memory/memory_ut.o 00:07:10.189 CC examples/thread/thread/thread_ex.o 00:07:10.189 CXX test/cpp_headers/bit_array.o 00:07:10.189 LINK spdk_tgt 00:07:10.189 LINK nvme_fuzz 00:07:10.189 CC test/env/pci/pci_ut.o 00:07:10.449 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:10.449 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:10.449 LINK env_dpdk_post_init 00:07:10.449 CXX test/cpp_headers/bit_pool.o 00:07:10.449 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:10.449 LINK thread 00:07:10.449 CXX test/cpp_headers/blob_bdev.o 00:07:10.449 CXX test/cpp_headers/blobfs_bdev.o 00:07:10.449 CXX test/cpp_headers/blobfs.o 00:07:10.449 CXX test/cpp_headers/blob.o 00:07:10.708 CXX test/cpp_headers/conf.o 00:07:10.708 CXX test/cpp_headers/config.o 00:07:10.708 LINK pci_ut 00:07:10.708 CC app/spdk_nvme_identify/identify.o 00:07:10.708 CC app/spdk_nvme_discover/discovery_aer.o 00:07:10.708 CC examples/sock/hello_world/hello_sock.o 00:07:10.708 CC app/spdk_top/spdk_top.o 00:07:10.708 CXX test/cpp_headers/cpuset.o 00:07:10.967 LINK vhost_fuzz 00:07:10.967 CXX test/cpp_headers/crc16.o 00:07:10.967 CXX test/cpp_headers/crc32.o 00:07:10.967 LINK spdk_nvme_discover 00:07:10.967 LINK hello_sock 00:07:10.967 LINK spdk_nvme_perf 00:07:11.228 CXX test/cpp_headers/crc64.o 00:07:11.228 CXX test/cpp_headers/dif.o 00:07:11.228 CC examples/vmd/lsvmd/lsvmd.o 00:07:11.228 CXX 
test/cpp_headers/dma.o 00:07:11.228 CC test/event/event_perf/event_perf.o 00:07:11.228 CC examples/idxd/perf/perf.o 00:07:11.488 CC test/event/reactor/reactor.o 00:07:11.488 LINK memory_ut 00:07:11.488 LINK lsvmd 00:07:11.488 CC app/vhost/vhost.o 00:07:11.488 CXX test/cpp_headers/endian.o 00:07:11.488 LINK event_perf 00:07:11.488 LINK reactor 00:07:11.488 LINK spdk_nvme_identify 00:07:11.488 LINK vhost 00:07:11.488 CXX test/cpp_headers/env_dpdk.o 00:07:11.749 CXX test/cpp_headers/env.o 00:07:11.749 CC test/event/reactor_perf/reactor_perf.o 00:07:11.749 CC examples/vmd/led/led.o 00:07:11.749 LINK idxd_perf 00:07:11.749 CXX test/cpp_headers/event.o 00:07:11.749 LINK reactor_perf 00:07:11.749 LINK spdk_top 00:07:11.749 LINK led 00:07:11.749 CC test/event/app_repeat/app_repeat.o 00:07:11.749 CC test/event/scheduler/scheduler.o 00:07:12.019 CXX test/cpp_headers/fd_group.o 00:07:12.019 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:12.019 CC examples/accel/perf/accel_perf.o 00:07:12.019 CC app/spdk_dd/spdk_dd.o 00:07:12.019 LINK app_repeat 00:07:12.019 LINK scheduler 00:07:12.019 CXX test/cpp_headers/fd.o 00:07:12.019 CC examples/blob/cli/blobcli.o 00:07:12.411 CC examples/blob/hello_world/hello_blob.o 00:07:12.411 CC examples/nvme/hello_world/hello_world.o 00:07:12.411 LINK hello_fsdev 00:07:12.411 CXX test/cpp_headers/file.o 00:07:12.411 LINK iscsi_fuzz 00:07:12.411 CC examples/nvme/reconnect/reconnect.o 00:07:12.411 LINK spdk_dd 00:07:12.411 CC test/nvme/aer/aer.o 00:07:12.411 LINK hello_world 00:07:12.411 LINK hello_blob 00:07:12.411 CXX test/cpp_headers/fsdev.o 00:07:12.411 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:12.411 LINK accel_perf 00:07:12.671 CXX test/cpp_headers/fsdev_module.o 00:07:12.671 CC test/app/histogram_perf/histogram_perf.o 00:07:12.671 CC test/app/jsoncat/jsoncat.o 00:07:12.671 LINK blobcli 00:07:12.671 CC examples/nvme/arbitration/arbitration.o 00:07:12.671 LINK reconnect 00:07:12.671 LINK aer 00:07:12.671 CC app/fio/nvme/fio_plugin.o 00:07:12.671 CC test/app/stub/stub.o 00:07:12.930 LINK histogram_perf 00:07:12.930 LINK jsoncat 00:07:12.930 CXX test/cpp_headers/ftl.o 00:07:12.930 CXX test/cpp_headers/fuse_dispatcher.o 00:07:12.930 CXX test/cpp_headers/gpt_spec.o 00:07:12.930 CXX test/cpp_headers/hexlify.o 00:07:12.930 CC test/nvme/reset/reset.o 00:07:12.930 LINK stub 00:07:13.190 LINK arbitration 00:07:13.190 LINK nvme_manage 00:07:13.190 CC examples/bdev/hello_world/hello_bdev.o 00:07:13.190 CC test/nvme/sgl/sgl.o 00:07:13.190 CXX test/cpp_headers/histogram_data.o 00:07:13.190 CXX test/cpp_headers/idxd.o 00:07:13.190 LINK reset 00:07:13.190 CC test/accel/dif/dif.o 00:07:13.190 CXX test/cpp_headers/idxd_spec.o 00:07:13.450 CC test/blobfs/mkfs/mkfs.o 00:07:13.450 CC examples/nvme/hotplug/hotplug.o 00:07:13.450 LINK hello_bdev 00:07:13.450 CC test/nvme/e2edp/nvme_dp.o 00:07:13.450 LINK sgl 00:07:13.450 LINK spdk_nvme 00:07:13.450 CXX test/cpp_headers/init.o 00:07:13.450 CC test/lvol/esnap/esnap.o 00:07:13.450 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:13.450 LINK mkfs 00:07:13.711 LINK hotplug 00:07:13.711 CXX test/cpp_headers/ioat.o 00:07:13.711 CC app/fio/bdev/fio_plugin.o 00:07:13.711 CC examples/bdev/bdevperf/bdevperf.o 00:07:13.711 CC test/nvme/overhead/overhead.o 00:07:13.711 LINK nvme_dp 00:07:13.711 LINK cmb_copy 00:07:13.711 CXX test/cpp_headers/ioat_spec.o 00:07:13.711 CC test/nvme/err_injection/err_injection.o 00:07:13.973 CC test/nvme/startup/startup.o 00:07:13.973 CXX test/cpp_headers/iscsi_spec.o 00:07:13.973 CC examples/nvme/abort/abort.o 
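(The CC/CXX/LINK prefixes throughout this build phase are the quiet-output convention: each recipe echoes a short tag plus its target instead of the full compiler command line. A minimal sketch of that pattern, as a hypothetical bash wrapper rather than SPDK's actual mk fragments:

  # Hypothetical quiet-build wrapper: print the short "CC <target>" tag
  # seen in this log, then run the real compiler silently.
  cc_quiet() {    # usage: cc_quiet <src.c> <out.o>
      printf '  CC %s\n' "$2"
      cc -c "$1" -o "$2"
  }
  cc_quiet examples/nvme/abort/abort.c examples/nvme/abort/abort.o

The real build drives the same idea from make rules, which is why only the tag and target appear above.)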
00:07:13.973 LINK overhead 00:07:13.973 LINK startup 00:07:13.973 LINK err_injection 00:07:13.973 CC test/nvme/reserve/reserve.o 00:07:13.973 LINK dif 00:07:14.234 CXX test/cpp_headers/json.o 00:07:14.234 CXX test/cpp_headers/jsonrpc.o 00:07:14.234 LINK reserve 00:07:14.234 CC test/nvme/simple_copy/simple_copy.o 00:07:14.234 CXX test/cpp_headers/keyring.o 00:07:14.234 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:14.234 LINK spdk_bdev 00:07:14.234 CC test/nvme/connect_stress/connect_stress.o 00:07:14.234 CXX test/cpp_headers/keyring_module.o 00:07:14.494 CXX test/cpp_headers/likely.o 00:07:14.494 LINK pmr_persistence 00:07:14.494 CXX test/cpp_headers/log.o 00:07:14.494 CC test/nvme/boot_partition/boot_partition.o 00:07:14.494 LINK connect_stress 00:07:14.494 LINK simple_copy 00:07:14.494 LINK abort 00:07:14.494 CXX test/cpp_headers/lvol.o 00:07:14.754 LINK bdevperf 00:07:14.754 CC test/bdev/bdevio/bdevio.o 00:07:14.754 CXX test/cpp_headers/md5.o 00:07:14.754 CXX test/cpp_headers/memory.o 00:07:14.754 CXX test/cpp_headers/mmio.o 00:07:14.754 LINK boot_partition 00:07:14.754 CC test/nvme/compliance/nvme_compliance.o 00:07:14.754 CC test/nvme/fused_ordering/fused_ordering.o 00:07:14.754 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:15.014 CXX test/cpp_headers/nbd.o 00:07:15.014 CXX test/cpp_headers/net.o 00:07:15.014 CXX test/cpp_headers/notify.o 00:07:15.014 CXX test/cpp_headers/nvme.o 00:07:15.014 CC test/nvme/fdp/fdp.o 00:07:15.014 LINK fused_ordering 00:07:15.014 LINK doorbell_aers 00:07:15.014 CXX test/cpp_headers/nvme_intel.o 00:07:15.014 CXX test/cpp_headers/nvme_ocssd.o 00:07:15.014 CC examples/nvmf/nvmf/nvmf.o 00:07:15.014 LINK bdevio 00:07:15.014 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:15.275 LINK nvme_compliance 00:07:15.275 CXX test/cpp_headers/nvme_spec.o 00:07:15.276 CXX test/cpp_headers/nvme_zns.o 00:07:15.276 CC test/nvme/cuse/cuse.o 00:07:15.276 CXX test/cpp_headers/nvmf_cmd.o 00:07:15.276 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:15.276 CXX test/cpp_headers/nvmf.o 00:07:15.276 CXX test/cpp_headers/nvmf_spec.o 00:07:15.276 LINK fdp 00:07:15.535 CXX test/cpp_headers/nvmf_transport.o 00:07:15.535 LINK nvmf 00:07:15.535 CXX test/cpp_headers/opal.o 00:07:15.535 CXX test/cpp_headers/opal_spec.o 00:07:15.535 CXX test/cpp_headers/pci_ids.o 00:07:15.535 CXX test/cpp_headers/pipe.o 00:07:15.535 CXX test/cpp_headers/queue.o 00:07:15.535 CXX test/cpp_headers/reduce.o 00:07:15.535 CXX test/cpp_headers/rpc.o 00:07:15.535 CXX test/cpp_headers/scheduler.o 00:07:15.535 CXX test/cpp_headers/scsi.o 00:07:15.535 CXX test/cpp_headers/scsi_spec.o 00:07:15.535 CXX test/cpp_headers/sock.o 00:07:15.795 CXX test/cpp_headers/stdinc.o 00:07:15.795 CXX test/cpp_headers/string.o 00:07:15.795 CXX test/cpp_headers/thread.o 00:07:15.795 CXX test/cpp_headers/trace.o 00:07:15.795 CXX test/cpp_headers/trace_parser.o 00:07:15.795 CXX test/cpp_headers/tree.o 00:07:15.795 CXX test/cpp_headers/ublk.o 00:07:15.795 CXX test/cpp_headers/util.o 00:07:15.795 CXX test/cpp_headers/uuid.o 00:07:15.795 CXX test/cpp_headers/version.o 00:07:15.795 CXX test/cpp_headers/vfio_user_pci.o 00:07:15.795 CXX test/cpp_headers/vfio_user_spec.o 00:07:15.795 CXX test/cpp_headers/vhost.o 00:07:16.074 CXX test/cpp_headers/vmd.o 00:07:16.074 CXX test/cpp_headers/xor.o 00:07:16.074 CXX test/cpp_headers/zipf.o 00:07:17.019 LINK cuse 00:07:19.569 LINK esnap 00:07:20.141 00:07:20.141 real 1m12.161s 00:07:20.141 user 6m37.001s 00:07:20.141 sys 1m14.887s 00:07:20.141 ************************************ 00:07:20.141 END TEST 
make 00:07:20.141 ************************************ 00:07:20.141 13:27:19 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:20.141 13:27:19 make -- common/autotest_common.sh@10 -- $ set +x 00:07:20.141 13:27:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:20.141 13:27:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:20.141 13:27:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:20.141 13:27:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:20.141 13:27:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:20.141 13:27:19 -- pm/common@44 -- $ pid=5074 00:07:20.142 13:27:19 -- pm/common@50 -- $ kill -TERM 5074 00:07:20.142 13:27:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:20.142 13:27:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:20.142 13:27:19 -- pm/common@44 -- $ pid=5075 00:07:20.142 13:27:19 -- pm/common@50 -- $ kill -TERM 5075 00:07:20.142 13:27:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:20.142 13:27:19 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:20.142 13:27:19 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.142 13:27:19 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.142 13:27:19 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.403 13:27:19 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.403 13:27:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.403 13:27:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.403 13:27:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.403 13:27:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.403 13:27:19 -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.403 13:27:19 -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.403 13:27:19 -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.403 13:27:19 -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.403 13:27:19 -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.403 13:27:19 -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.403 13:27:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.403 13:27:19 -- scripts/common.sh@344 -- # case "$op" in 00:07:20.403 13:27:19 -- scripts/common.sh@345 -- # : 1 00:07:20.403 13:27:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.403 13:27:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.403 13:27:19 -- scripts/common.sh@365 -- # decimal 1 00:07:20.403 13:27:19 -- scripts/common.sh@353 -- # local d=1 00:07:20.403 13:27:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.403 13:27:19 -- scripts/common.sh@355 -- # echo 1 00:07:20.403 13:27:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.403 13:27:19 -- scripts/common.sh@366 -- # decimal 2 00:07:20.403 13:27:19 -- scripts/common.sh@353 -- # local d=2 00:07:20.403 13:27:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.403 13:27:19 -- scripts/common.sh@355 -- # echo 2 00:07:20.403 13:27:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.403 13:27:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.403 13:27:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.403 13:27:19 -- scripts/common.sh@368 -- # return 0 00:07:20.403 13:27:19 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.403 13:27:19 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.403 --rc genhtml_branch_coverage=1 00:07:20.403 --rc genhtml_function_coverage=1 00:07:20.403 --rc genhtml_legend=1 00:07:20.403 --rc geninfo_all_blocks=1 00:07:20.403 --rc geninfo_unexecuted_blocks=1 00:07:20.403 00:07:20.403 ' 00:07:20.403 13:27:19 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.403 --rc genhtml_branch_coverage=1 00:07:20.403 --rc genhtml_function_coverage=1 00:07:20.403 --rc genhtml_legend=1 00:07:20.403 --rc geninfo_all_blocks=1 00:07:20.403 --rc geninfo_unexecuted_blocks=1 00:07:20.403 00:07:20.403 ' 00:07:20.403 13:27:19 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.403 --rc genhtml_branch_coverage=1 00:07:20.403 --rc genhtml_function_coverage=1 00:07:20.403 --rc genhtml_legend=1 00:07:20.403 --rc geninfo_all_blocks=1 00:07:20.403 --rc geninfo_unexecuted_blocks=1 00:07:20.403 00:07:20.403 ' 00:07:20.403 13:27:19 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.403 --rc genhtml_branch_coverage=1 00:07:20.403 --rc genhtml_function_coverage=1 00:07:20.403 --rc genhtml_legend=1 00:07:20.403 --rc geninfo_all_blocks=1 00:07:20.403 --rc geninfo_unexecuted_blocks=1 00:07:20.403 00:07:20.403 ' 00:07:20.403 13:27:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.403 13:27:19 -- nvmf/common.sh@7 -- # uname -s 00:07:20.403 13:27:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.403 13:27:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.403 13:27:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.403 13:27:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.403 13:27:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.403 13:27:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.403 13:27:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.403 13:27:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.403 13:27:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.403 13:27:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.403 13:27:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:304320ee-eac1-42a3-9c03-847f5b09ca5b 00:07:20.403 
13:27:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=304320ee-eac1-42a3-9c03-847f5b09ca5b 00:07:20.403 13:27:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.403 13:27:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.403 13:27:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:20.403 13:27:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.403 13:27:19 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.403 13:27:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.403 13:27:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.403 13:27:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.403 13:27:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.403 13:27:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.403 13:27:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.403 13:27:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.403 13:27:19 -- paths/export.sh@5 -- # export PATH 00:07:20.403 13:27:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.403 13:27:19 -- nvmf/common.sh@51 -- # : 0 00:07:20.403 13:27:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.403 13:27:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.403 13:27:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.403 13:27:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.403 13:27:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.403 13:27:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.403 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.403 13:27:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.403 13:27:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.403 13:27:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.403 13:27:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:20.403 13:27:19 -- spdk/autotest.sh@32 -- # uname -s 00:07:20.403 13:27:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:20.403 13:27:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:20.403 13:27:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:20.403 13:27:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:20.403 13:27:19 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:20.403 13:27:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:20.403 13:27:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:20.403 13:27:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:20.403 13:27:19 -- spdk/autotest.sh@48 -- # udevadm_pid=54311 00:07:20.403 13:27:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:20.403 13:27:19 -- pm/common@17 -- # local monitor 00:07:20.403 13:27:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:20.403 13:27:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:20.403 13:27:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:20.403 13:27:19 -- pm/common@25 -- # sleep 1 00:07:20.403 13:27:19 -- pm/common@21 -- # date +%s 00:07:20.403 13:27:19 -- pm/common@21 -- # date +%s 00:07:20.403 13:27:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109239 00:07:20.403 13:27:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109239 00:07:20.403 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109239_collect-cpu-load.pm.log 00:07:20.403 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109239_collect-vmstat.pm.log 00:07:21.366 13:27:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:21.366 13:27:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:21.366 13:27:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.366 13:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:21.366 13:27:20 -- spdk/autotest.sh@59 -- # create_test_list 00:07:21.366 13:27:20 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:21.366 13:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:21.367 13:27:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:21.367 13:27:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:21.367 13:27:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:21.367 13:27:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:21.367 13:27:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:21.367 13:27:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:21.367 13:27:20 -- common/autotest_common.sh@1457 -- # uname 00:07:21.367 13:27:20 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:21.367 13:27:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:21.367 13:27:20 -- common/autotest_common.sh@1477 -- # uname 00:07:21.367 13:27:20 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:21.367 13:27:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:21.367 13:27:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:21.626 lcov: LCOV version 1.15 00:07:21.626 13:27:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:39.756 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:39.756 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:54.816 13:27:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:54.816 13:27:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.816 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:07:54.816 13:27:52 -- spdk/autotest.sh@78 -- # rm -f 00:07:54.816 13:27:52 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:54.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.816 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:54.816 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:54.816 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:54.816 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:54.816 13:27:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:54.816 13:27:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:54.816 13:27:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:54.817 13:27:53 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:54.817 13:27:53 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:54.817 13:27:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:54.817 13:27:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:54.817 13:27:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.817 13:27:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:54.817 13:27:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:54.817 13:27:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:54.817 13:27:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:54.817 13:27:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:54.817 13:27:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:54.817 No valid GPT data, bailing 00:07:54.817 13:27:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:54.817 13:27:54 -- scripts/common.sh@394 -- # pt= 00:07:54.817 13:27:54 -- scripts/common.sh@395 -- # return 1 00:07:54.817 13:27:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:54.817 1+0 records in 00:07:54.817 1+0 records out 00:07:54.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281237 s, 37.3 MB/s 00:07:54.817 13:27:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:54.817 13:27:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:54.817 13:27:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:54.817 13:27:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:54.817 13:27:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:54.817 No valid GPT data, bailing 00:07:54.817 13:27:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:54.817 13:27:54 -- scripts/common.sh@394 -- # pt= 00:07:54.817 13:27:54 -- scripts/common.sh@395 -- # return 1 00:07:54.817 13:27:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:54.817 1+0 records in 00:07:54.817 1+0 records out 00:07:54.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671505 s, 156 MB/s 00:07:54.817 13:27:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:54.817 13:27:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:54.817 13:27:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:54.817 13:27:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:54.817 13:27:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:54.817 No valid GPT data, bailing 00:07:54.817 13:27:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:54.817 13:27:54 -- scripts/common.sh@394 -- # pt= 00:07:54.817 13:27:54 -- scripts/common.sh@395 -- # return 1 00:07:54.817 13:27:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:54.817 1+0 
records in 00:07:54.817 1+0 records out 00:07:54.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633149 s, 166 MB/s 00:07:54.817 13:27:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:54.817 13:27:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:54.817 13:27:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:54.817 13:27:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:54.817 13:27:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:55.078 No valid GPT data, bailing 00:07:55.078 13:27:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:55.078 13:27:54 -- scripts/common.sh@394 -- # pt= 00:07:55.078 13:27:54 -- scripts/common.sh@395 -- # return 1 00:07:55.078 13:27:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:55.078 1+0 records in 00:07:55.078 1+0 records out 00:07:55.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490165 s, 214 MB/s 00:07:55.078 13:27:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:55.078 13:27:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:55.078 13:27:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:55.078 13:27:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:55.078 13:27:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:55.078 No valid GPT data, bailing 00:07:55.078 13:27:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:55.078 13:27:54 -- scripts/common.sh@394 -- # pt= 00:07:55.078 13:27:54 -- scripts/common.sh@395 -- # return 1 00:07:55.078 13:27:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:55.078 1+0 records in 00:07:55.078 1+0 records out 00:07:55.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573604 s, 183 MB/s 00:07:55.078 13:27:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:55.078 13:27:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:55.078 13:27:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:55.078 13:27:54 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:55.078 13:27:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:55.078 No valid GPT data, bailing 00:07:55.078 13:27:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:55.078 13:27:54 -- scripts/common.sh@394 -- # pt= 00:07:55.078 13:27:54 -- scripts/common.sh@395 -- # return 1 00:07:55.078 13:27:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:55.078 1+0 records in 00:07:55.078 1+0 records out 00:07:55.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501162 s, 209 MB/s 00:07:55.078 13:27:54 -- spdk/autotest.sh@105 -- # sync 00:07:55.339 13:27:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:55.339 13:27:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:55.339 13:27:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:57.250 13:27:56 -- spdk/autotest.sh@111 -- # uname -s 00:07:57.250 13:27:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:57.250 13:27:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:57.250 13:27:56 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:57.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:58.080 
Hugepages 00:07:58.080 node hugesize free / total 00:07:58.080 node0 1048576kB 0 / 0 00:07:58.080 node0 2048kB 0 / 0 00:07:58.080 00:07:58.080 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:58.080 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:58.080 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:58.340 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:58.340 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:58.340 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:58.340 13:27:57 -- spdk/autotest.sh@117 -- # uname -s 00:07:58.340 13:27:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:58.340 13:27:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:58.340 13:27:57 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:58.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:59.482 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.482 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.482 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.742 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.742 13:27:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:00.683 13:27:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:00.683 13:27:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:00.683 13:27:59 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.683 13:27:59 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:00.683 13:27:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:00.683 13:27:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:00.683 13:27:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.683 13:27:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:00.683 13:27:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:00.683 13:28:00 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:00.683 13:28:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:00.683 13:28:00 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:01.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:01.252 Waiting for block devices as requested 00:08:01.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:01.512 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:01.512 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:01.512 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.796 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:06.796 13:28:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:06.796 13:28:06 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:06.796 13:28:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:06.796 13:28:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1543 -- # continue 00:08:06.796 13:28:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:06.796 13:28:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1543 -- # continue 00:08:06.796 13:28:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:06.796 13:28:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1543 -- # continue 00:08:06.796 13:28:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:08:06.796 13:28:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:06.796 13:28:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:06.796 13:28:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:06.796 13:28:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:06.797 13:28:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
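(Each pass of the loop traced around this point runs the same controller probe: resolve the PCI address to its /dev/nvmeX node through the sysfs symlinks, read the OACS word from `nvme id-ctrl`, test the namespace-management bit, and skip controllers whose unallocated capacity is already zero. A hedged bash sketch of that logic, illustrative rather than a verbatim copy of autotest_common.sh:

  bdf=0000:00:10.0
  # Resolve the BDF to its controller node via the sysfs symlinks.
  sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrlr=/dev/$(basename "$sysfs")
  # OACS bit 3 = namespace management; here 0x12a & 0x8 = 8, so it is set.
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  if (( (oacs & 0x8) != 0 )); then
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "nothing to revert on $ctrlr"
  fi

With unvmcap at 0 on every controller, each iteration above ends in `continue`, so no namespace revert is performed.)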
00:08:06.797 13:28:06 -- common/autotest_common.sh@1543 -- # continue 00:08:06.797 13:28:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:06.797 13:28:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.797 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 13:28:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:06.797 13:28:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.797 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:08:06.797 13:28:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:07.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:07.988 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:07.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.249 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.249 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.249 13:28:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:08.249 13:28:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.249 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.249 13:28:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:08.249 13:28:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:08.249 13:28:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:08.249 13:28:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:08.249 13:28:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:08.249 13:28:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:08.249 13:28:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:08.249 13:28:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:08.249 13:28:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:08.249 13:28:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:08.249 13:28:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:08.249 13:28:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:08.249 13:28:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:08.249 13:28:07 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:08.249 13:28:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:08.249 13:28:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:08.249 13:28:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:08.249 13:28:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:08.249 13:28:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:08.249 13:28:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:08.249 13:28:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:08.249 13:28:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:08.249 13:28:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:08.249 13:28:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:08.510 13:28:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:08.510 13:28:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:08.510 13:28:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
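(The loop traced on either side of this note is the opal-revert filter: for every detected controller it reads the PCI device ID from sysfs and keeps only devices matching 0x0a54; the emulated 1b36:0010 QEMU controllers in this run never match, so the revert list stays empty and the function returns 0. A hedged sketch of the filter:

  bdfs=()
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      # Only 0x0a54 parts are opal-reverted; QEMU's 0x0010 never matches.
      [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
  done
  (( ${#bdfs[@]} > 0 )) || echo "no opal-capable devices found"

The `\0\x\0\a\5\4` in the trace is just bash's xtrace escaping of the literal pattern 0x0a54 on the right-hand side of `==`.)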
00:08:08.510 13:28:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:08.510 13:28:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:08.510 13:28:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:08.510 13:28:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:08.510 13:28:07 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:08.510 13:28:07 -- common/autotest_common.sh@1572 -- # return 0 00:08:08.510 13:28:07 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:08.510 13:28:07 -- common/autotest_common.sh@1580 -- # return 0 00:08:08.510 13:28:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:08.510 13:28:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:08.510 13:28:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:08.510 13:28:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:08.510 13:28:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:08.510 13:28:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.510 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.510 13:28:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:08.511 13:28:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:08.511 13:28:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.511 13:28:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.511 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.511 ************************************ 00:08:08.511 START TEST env 00:08:08.511 ************************************ 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:08.511 * Looking for test storage... 00:08:08.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.511 13:28:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.511 13:28:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.511 13:28:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.511 13:28:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.511 13:28:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.511 13:28:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.511 13:28:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.511 13:28:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.511 13:28:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.511 13:28:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.511 13:28:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.511 13:28:07 env -- scripts/common.sh@344 -- # case "$op" in 00:08:08.511 13:28:07 env -- scripts/common.sh@345 -- # : 1 00:08:08.511 13:28:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.511 13:28:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.511 13:28:07 env -- scripts/common.sh@365 -- # decimal 1 00:08:08.511 13:28:07 env -- scripts/common.sh@353 -- # local d=1 00:08:08.511 13:28:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.511 13:28:07 env -- scripts/common.sh@355 -- # echo 1 00:08:08.511 13:28:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.511 13:28:07 env -- scripts/common.sh@366 -- # decimal 2 00:08:08.511 13:28:07 env -- scripts/common.sh@353 -- # local d=2 00:08:08.511 13:28:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.511 13:28:07 env -- scripts/common.sh@355 -- # echo 2 00:08:08.511 13:28:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.511 13:28:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.511 13:28:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.511 13:28:07 env -- scripts/common.sh@368 -- # return 0 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.511 --rc genhtml_branch_coverage=1 00:08:08.511 --rc genhtml_function_coverage=1 00:08:08.511 --rc genhtml_legend=1 00:08:08.511 --rc geninfo_all_blocks=1 00:08:08.511 --rc geninfo_unexecuted_blocks=1 00:08:08.511 00:08:08.511 ' 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.511 --rc genhtml_branch_coverage=1 00:08:08.511 --rc genhtml_function_coverage=1 00:08:08.511 --rc genhtml_legend=1 00:08:08.511 --rc geninfo_all_blocks=1 00:08:08.511 --rc geninfo_unexecuted_blocks=1 00:08:08.511 00:08:08.511 ' 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.511 --rc genhtml_branch_coverage=1 00:08:08.511 --rc genhtml_function_coverage=1 00:08:08.511 --rc genhtml_legend=1 00:08:08.511 --rc geninfo_all_blocks=1 00:08:08.511 --rc geninfo_unexecuted_blocks=1 00:08:08.511 00:08:08.511 ' 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.511 --rc genhtml_branch_coverage=1 00:08:08.511 --rc genhtml_function_coverage=1 00:08:08.511 --rc genhtml_legend=1 00:08:08.511 --rc geninfo_all_blocks=1 00:08:08.511 --rc geninfo_unexecuted_blocks=1 00:08:08.511 00:08:08.511 ' 00:08:08.511 13:28:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.511 13:28:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.511 13:28:07 env -- common/autotest_common.sh@10 -- # set +x 00:08:08.511 ************************************ 00:08:08.511 START TEST env_memory 00:08:08.511 ************************************ 00:08:08.511 13:28:07 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:08.511 00:08:08.511 00:08:08.511 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.511 http://cunit.sourceforge.net/ 00:08:08.511 00:08:08.511 00:08:08.511 Suite: memory 00:08:08.772 Test: alloc and free memory map ...[2024-11-20 13:28:07.965539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:08.772 passed 00:08:08.772 Test: mem map translation ...[2024-11-20 13:28:08.008406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:08.772 [2024-11-20 13:28:08.008606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:08.772 [2024-11-20 13:28:08.008742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:08.772 [2024-11-20 13:28:08.008786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:08.772 passed 00:08:08.772 Test: mem map registration ...[2024-11-20 13:28:08.077770] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:08.772 [2024-11-20 13:28:08.077962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:08.772 passed 00:08:08.772 Test: mem map adjacent registrations ...passed 00:08:08.772 00:08:08.772 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.772 suites 1 1 n/a 0 0 00:08:08.772 tests 4 4 4 0 0 00:08:08.772 asserts 152 152 152 0 n/a 00:08:08.772 00:08:08.772 Elapsed time = 0.240 seconds 00:08:08.772 ************************************ 00:08:08.772 END TEST env_memory 00:08:08.772 ************************************ 00:08:08.772 00:08:08.772 real 0m0.273s 00:08:08.772 user 0m0.236s 00:08:08.772 sys 0m0.026s 00:08:08.772 13:28:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.772 13:28:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:09.034 13:28:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:09.034 13:28:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.034 13:28:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.034 13:28:08 env -- common/autotest_common.sh@10 -- # set +x 00:08:09.034 ************************************ 00:08:09.034 START TEST env_vtophys 00:08:09.034 ************************************ 00:08:09.034 13:28:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:09.034 EAL: lib.eal log level changed from notice to debug 00:08:09.034 EAL: Detected lcore 0 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 1 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 2 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 3 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 4 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 5 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 6 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 7 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 8 as core 0 on socket 0 00:08:09.034 EAL: Detected lcore 9 as core 0 on socket 0 00:08:09.034 EAL: Maximum logical cores by configuration: 128 00:08:09.034 EAL: Detected CPU lcores: 10 00:08:09.034 EAL: Detected NUMA nodes: 1 00:08:09.034 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:09.034 EAL: Detected shared linkage of DPDK 00:08:09.034 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:09.034 EAL: Selected IOVA mode 'PA' 00:08:09.034 EAL: Probing VFIO support... 00:08:09.034 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:09.034 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:09.034 EAL: Ask a virtual area of 0x2e000 bytes 00:08:09.034 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:09.034 EAL: Setting up physically contiguous memory... 00:08:09.034 EAL: Setting maximum number of open files to 524288 00:08:09.034 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:09.034 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:09.034 EAL: Ask a virtual area of 0x61000 bytes 00:08:09.034 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:09.034 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:09.034 EAL: Ask a virtual area of 0x400000000 bytes 00:08:09.034 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:09.034 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:09.034 EAL: Ask a virtual area of 0x61000 bytes 00:08:09.034 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:09.034 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:09.034 EAL: Ask a virtual area of 0x400000000 bytes 00:08:09.034 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:09.034 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:09.034 EAL: Ask a virtual area of 0x61000 bytes 00:08:09.034 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:09.034 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:09.034 EAL: Ask a virtual area of 0x400000000 bytes 00:08:09.035 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:09.035 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:09.035 EAL: Ask a virtual area of 0x61000 bytes 00:08:09.035 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:09.035 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:09.035 EAL: Ask a virtual area of 0x400000000 bytes 00:08:09.035 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:09.035 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:09.035 EAL: Hugepages will be freed exactly as allocated. 00:08:09.035 EAL: No shared files mode enabled, IPC is disabled 00:08:09.035 EAL: No shared files mode enabled, IPC is disabled 00:08:09.035 EAL: TSC frequency is ~2600000 KHz 00:08:09.035 EAL: Main lcore 0 is ready (tid=7f2cfa71ea40;cpuset=[0]) 00:08:09.035 EAL: Trying to obtain current memory policy. 00:08:09.035 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.035 EAL: Restoring previous memory policy: 0 00:08:09.035 EAL: request: mp_malloc_sync 00:08:09.035 EAL: No shared files mode enabled, IPC is disabled 00:08:09.035 EAL: Heap on socket 0 was expanded by 2MB 00:08:09.035 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:09.035 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:09.035 EAL: Mem event callback 'spdk:(nil)' registered 00:08:09.035 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:09.296 00:08:09.296 00:08:09.296 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.296 http://cunit.sourceforge.net/ 00:08:09.296 00:08:09.296 00:08:09.296 Suite: components_suite 00:08:09.556 Test: vtophys_malloc_test ...passed 00:08:09.556 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:09.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.556 EAL: Restoring previous memory policy: 4 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was expanded by 4MB 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was shrunk by 4MB 00:08:09.556 EAL: Trying to obtain current memory policy. 00:08:09.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.556 EAL: Restoring previous memory policy: 4 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was expanded by 6MB 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was shrunk by 6MB 00:08:09.556 EAL: Trying to obtain current memory policy. 00:08:09.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.556 EAL: Restoring previous memory policy: 4 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was expanded by 10MB 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was shrunk by 10MB 00:08:09.556 EAL: Trying to obtain current memory policy. 00:08:09.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.556 EAL: Restoring previous memory policy: 4 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was expanded by 18MB 00:08:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.556 EAL: request: mp_malloc_sync 00:08:09.556 EAL: No shared files mode enabled, IPC is disabled 00:08:09.556 EAL: Heap on socket 0 was shrunk by 18MB 00:08:09.815 EAL: Trying to obtain current memory policy. 00:08:09.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.815 EAL: Restoring previous memory policy: 4 00:08:09.815 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.815 EAL: request: mp_malloc_sync 00:08:09.815 EAL: No shared files mode enabled, IPC is disabled 00:08:09.815 EAL: Heap on socket 0 was expanded by 34MB 00:08:09.815 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.815 EAL: request: mp_malloc_sync 00:08:09.816 EAL: No shared files mode enabled, IPC is disabled 00:08:09.816 EAL: Heap on socket 0 was shrunk by 34MB 00:08:09.816 EAL: Trying to obtain current memory policy. 
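[annotation] The *ERROR* lines printed by env_memory above are expected negative cases: spdk_mem_map_set_translation() rejects any vaddr or len that is not a 2 MB multiple, and the initial "mem_map notify failed" is the suite deliberately making the notify callback fail. A minimal sketch of the API under test, assuming a no-op notify callback (illustrative only, not the unit test's source):

#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
demo_notify(void *cb_ctx, struct spdk_mem_map *map,
            enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
        return 0; /* accept every region; a non-zero return makes spdk_mem_map_alloc() fail */
}

static const struct spdk_mem_map_ops demo_ops = {
        .notify_cb = demo_notify,
        .are_contiguous = NULL,
};

static void
demo_mem_map(void)
{
        struct spdk_mem_map *map = spdk_mem_map_alloc(UINT64_MAX, &demo_ops, NULL);
        uint64_t len = 0x200000;
        uint64_t trans;

        /* vaddr and len must both be 2 MB multiples, hence the len=1234 and vaddr=4d2 rejections above */
        spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x7fff0000);
        trans = spdk_mem_map_translate(map, 0x200000, &len); /* returns the stored translation */
        (void)trans;
        spdk_mem_map_free(&map);
}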
00:08:09.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.816 EAL: Restoring previous memory policy: 4 00:08:09.816 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.816 EAL: request: mp_malloc_sync 00:08:09.816 EAL: No shared files mode enabled, IPC is disabled 00:08:09.816 EAL: Heap on socket 0 was expanded by 66MB 00:08:09.816 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.816 EAL: request: mp_malloc_sync 00:08:09.816 EAL: No shared files mode enabled, IPC is disabled 00:08:09.816 EAL: Heap on socket 0 was shrunk by 66MB 00:08:10.073 EAL: Trying to obtain current memory policy. 00:08:10.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.073 EAL: Restoring previous memory policy: 4 00:08:10.073 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.073 EAL: request: mp_malloc_sync 00:08:10.073 EAL: No shared files mode enabled, IPC is disabled 00:08:10.073 EAL: Heap on socket 0 was expanded by 130MB 00:08:10.073 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.073 EAL: request: mp_malloc_sync 00:08:10.073 EAL: No shared files mode enabled, IPC is disabled 00:08:10.073 EAL: Heap on socket 0 was shrunk by 130MB 00:08:10.332 EAL: Trying to obtain current memory policy. 00:08:10.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.332 EAL: Restoring previous memory policy: 4 00:08:10.332 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.332 EAL: request: mp_malloc_sync 00:08:10.332 EAL: No shared files mode enabled, IPC is disabled 00:08:10.332 EAL: Heap on socket 0 was expanded by 258MB 00:08:10.590 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.850 EAL: request: mp_malloc_sync 00:08:10.850 EAL: No shared files mode enabled, IPC is disabled 00:08:10.850 EAL: Heap on socket 0 was shrunk by 258MB 00:08:11.110 EAL: Trying to obtain current memory policy. 00:08:11.110 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.110 EAL: Restoring previous memory policy: 4 00:08:11.110 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.110 EAL: request: mp_malloc_sync 00:08:11.110 EAL: No shared files mode enabled, IPC is disabled 00:08:11.110 EAL: Heap on socket 0 was expanded by 514MB 00:08:11.681 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.941 EAL: request: mp_malloc_sync 00:08:11.941 EAL: No shared files mode enabled, IPC is disabled 00:08:11.941 EAL: Heap on socket 0 was shrunk by 514MB 00:08:12.511 EAL: Trying to obtain current memory policy. 
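[annotation] The expand/shrink pairs in this suite come from vtophys_spdk_malloc_test allocating progressively larger buffers (4 MB up to 1026 MB) and freeing them; each allocation grows the EAL heap and fires the registered 'spdk' mem event callback, which is what keeps the vtophys map current. Application code reaches the same translation path through spdk_vtophys(); a minimal sketch with an arbitrary 4 MB buffer (illustrative only):

#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
demo_vtophys(void)
{
        uint64_t len = 4 * 1024 * 1024;
        void *buf = spdk_dma_malloc(len, 0x200000, NULL); /* 2 MB-aligned, DMA-safe memory */
        uint64_t paddr;

        if (buf == NULL) {
                return -1;
        }
        paddr = spdk_vtophys(buf, &len); /* len is updated to the contiguous mapped run */
        spdk_dma_free(buf);
        return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
}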
00:08:12.511 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:12.770 EAL: Restoring previous memory policy: 4 00:08:12.770 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.770 EAL: request: mp_malloc_sync 00:08:12.770 EAL: No shared files mode enabled, IPC is disabled 00:08:12.770 EAL: Heap on socket 0 was expanded by 1026MB 00:08:14.148 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.149 EAL: request: mp_malloc_sync 00:08:14.149 EAL: No shared files mode enabled, IPC is disabled 00:08:14.149 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:15.531 passed 00:08:15.531 00:08:15.531 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.531 suites 1 1 n/a 0 0 00:08:15.531 tests 2 2 2 0 0 00:08:15.531 asserts 5796 5796 5796 0 n/a 00:08:15.531 00:08:15.531 Elapsed time = 6.057 seconds 00:08:15.531 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.531 EAL: request: mp_malloc_sync 00:08:15.531 EAL: No shared files mode enabled, IPC is disabled 00:08:15.532 EAL: Heap on socket 0 was shrunk by 2MB 00:08:15.532 EAL: No shared files mode enabled, IPC is disabled 00:08:15.532 EAL: No shared files mode enabled, IPC is disabled 00:08:15.532 EAL: No shared files mode enabled, IPC is disabled 00:08:15.532 00:08:15.532 real 0m6.409s 00:08:15.532 user 0m5.141s 00:08:15.532 sys 0m1.077s 00:08:15.532 ************************************ 00:08:15.532 END TEST env_vtophys 00:08:15.532 ************************************ 00:08:15.532 13:28:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.532 13:28:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:15.532 13:28:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:15.532 13:28:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.532 13:28:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.532 13:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.532 ************************************ 00:08:15.532 START TEST env_pci 00:08:15.532 ************************************ 00:08:15.532 13:28:14 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:15.532 00:08:15.532 00:08:15.532 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.532 http://cunit.sourceforge.net/ 00:08:15.532 00:08:15.532 00:08:15.532 Suite: pci 00:08:15.532 Test: pci_hook ...[2024-11-20 13:28:14.757366] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57126 has claimed it 00:08:15.532 passed 00:08:15.532 00:08:15.532 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.532 suites 1 1 n/a 0 0 00:08:15.532 tests 1 1 1 0 0 00:08:15.532 asserts 25 25 25 0 n/a 00:08:15.532 00:08:15.532 Elapsed time = 0.009 seconds 00:08:15.532 EAL: Cannot find device (10000:00:01.0) 00:08:15.532 EAL: Failed to attach device on primary process 00:08:15.532 00:08:15.532 real 0m0.071s 00:08:15.532 user 0m0.030s 00:08:15.532 sys 0m0.038s 00:08:15.532 ************************************ 00:08:15.532 END TEST env_pci 00:08:15.532 ************************************ 00:08:15.532 13:28:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.532 13:28:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:15.532 13:28:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:15.532 13:28:14 env -- env/env.sh@15 -- # uname 00:08:15.532 13:28:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:15.532 13:28:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:15.532 13:28:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:15.532 13:28:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:15.532 13:28:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.532 13:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.532 ************************************ 00:08:15.532 START TEST env_dpdk_post_init 00:08:15.532 ************************************ 00:08:15.532 13:28:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:15.532 EAL: Detected CPU lcores: 10 00:08:15.532 EAL: Detected NUMA nodes: 1 00:08:15.532 EAL: Detected shared linkage of DPDK 00:08:15.532 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:15.532 EAL: Selected IOVA mode 'PA' 00:08:15.793 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:15.793 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:15.793 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:15.793 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:15.793 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:15.793 Starting DPDK initialization... 00:08:15.793 Starting SPDK post initialization... 00:08:15.793 SPDK NVMe probe 00:08:15.793 Attaching to 0000:00:10.0 00:08:15.793 Attaching to 0000:00:11.0 00:08:15.793 Attaching to 0000:00:12.0 00:08:15.793 Attaching to 0000:00:13.0 00:08:15.793 Attached to 0000:00:10.0 00:08:15.793 Attached to 0000:00:11.0 00:08:15.793 Attached to 0000:00:13.0 00:08:15.793 Attached to 0000:00:12.0 00:08:15.793 Cleaning up... 
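[annotation] The "Attaching/Attached" lines above are spdk_nvme_probe() walking the emulated 1b36:0010 controllers; attach callbacks can complete out of bus-address order, which is why 0000:00:13.0 reports attached before 0000:00:12.0. A rough sketch of the init-then-probe flow the harness drives with -c 0x1 --base-virtaddr=0x200000000000 (callback bodies trimmed; not the test binary's actual source):

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
demo_probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr_opts *opts)
{
        return true; /* attach to every controller the scan finds */
}

static void
demo_attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
               struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
        printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_demo";
        opts.core_mask = "0x1";                  /* mirrors -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL;  /* mirrors --base-virtaddr */
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        if (spdk_nvme_probe(NULL, NULL, demo_probe_cb, demo_attach_cb, NULL) != 0) {
                return 1;
        }
        spdk_env_fini();
        return 0;
}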
00:08:15.793 00:08:15.793 real 0m0.275s 00:08:15.793 user 0m0.094s 00:08:15.793 sys 0m0.080s 00:08:15.793 ************************************ 00:08:15.793 END TEST env_dpdk_post_init 00:08:15.793 ************************************ 00:08:15.793 13:28:15 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.793 13:28:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:15.793 13:28:15 env -- env/env.sh@26 -- # uname 00:08:15.793 13:28:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:15.793 13:28:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:15.793 13:28:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.793 13:28:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.793 13:28:15 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.793 ************************************ 00:08:15.793 START TEST env_mem_callbacks 00:08:15.793 ************************************ 00:08:15.793 13:28:15 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:16.052 EAL: Detected CPU lcores: 10 00:08:16.052 EAL: Detected NUMA nodes: 1 00:08:16.052 EAL: Detected shared linkage of DPDK 00:08:16.052 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:16.052 EAL: Selected IOVA mode 'PA' 00:08:16.052 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:16.052 00:08:16.052 00:08:16.052 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.052 http://cunit.sourceforge.net/ 00:08:16.052 00:08:16.052 00:08:16.052 Suite: memory 00:08:16.052 Test: test ... 00:08:16.052 register 0x200000200000 2097152 00:08:16.052 malloc 3145728 00:08:16.052 register 0x200000400000 4194304 00:08:16.052 buf 0x2000004fffc0 len 3145728 PASSED 00:08:16.052 malloc 64 00:08:16.052 buf 0x2000004ffec0 len 64 PASSED 00:08:16.052 malloc 4194304 00:08:16.052 register 0x200000800000 6291456 00:08:16.052 buf 0x2000009fffc0 len 4194304 PASSED 00:08:16.052 free 0x2000004fffc0 3145728 00:08:16.052 free 0x2000004ffec0 64 00:08:16.052 unregister 0x200000400000 4194304 PASSED 00:08:16.052 free 0x2000009fffc0 4194304 00:08:16.052 unregister 0x200000800000 6291456 PASSED 00:08:16.052 malloc 8388608 00:08:16.052 register 0x200000400000 10485760 00:08:16.052 buf 0x2000005fffc0 len 8388608 PASSED 00:08:16.052 free 0x2000005fffc0 8388608 00:08:16.052 unregister 0x200000400000 10485760 PASSED 00:08:16.052 passed 00:08:16.052 00:08:16.052 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.052 suites 1 1 n/a 0 0 00:08:16.052 tests 1 1 1 0 0 00:08:16.052 asserts 15 15 15 0 n/a 00:08:16.052 00:08:16.052 Elapsed time = 0.046 seconds 00:08:16.052 00:08:16.052 real 0m0.245s 00:08:16.052 user 0m0.069s 00:08:16.052 sys 0m0.069s 00:08:16.052 13:28:15 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.052 13:28:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:16.052 ************************************ 00:08:16.052 END TEST env_mem_callbacks 00:08:16.052 ************************************ 00:08:16.312 ************************************ 00:08:16.312 END TEST env 00:08:16.312 ************************************ 00:08:16.312 00:08:16.312 real 0m7.772s 00:08:16.312 user 0m5.733s 00:08:16.312 sys 0m1.529s 00:08:16.312 13:28:15 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.312 13:28:15 env -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.312 13:28:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:16.312 13:28:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.312 13:28:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.312 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:16.312 ************************************ 00:08:16.312 START TEST rpc 00:08:16.312 ************************************ 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:16.312 * Looking for test storage... 00:08:16.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.312 13:28:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.312 13:28:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.312 13:28:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.312 13:28:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.312 13:28:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.312 13:28:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:16.312 13:28:15 rpc -- scripts/common.sh@345 -- # : 1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.312 13:28:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.312 13:28:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@353 -- # local d=1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.312 13:28:15 rpc -- scripts/common.sh@355 -- # echo 1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.312 13:28:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@353 -- # local d=2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.312 13:28:15 rpc -- scripts/common.sh@355 -- # echo 2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.312 13:28:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.312 13:28:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.312 13:28:15 rpc -- scripts/common.sh@368 -- # return 0 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.312 --rc genhtml_branch_coverage=1 00:08:16.312 --rc genhtml_function_coverage=1 00:08:16.312 --rc genhtml_legend=1 00:08:16.312 --rc geninfo_all_blocks=1 00:08:16.312 --rc geninfo_unexecuted_blocks=1 00:08:16.312 00:08:16.312 ' 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.312 --rc genhtml_branch_coverage=1 00:08:16.312 --rc genhtml_function_coverage=1 00:08:16.312 --rc genhtml_legend=1 00:08:16.312 --rc geninfo_all_blocks=1 00:08:16.312 --rc geninfo_unexecuted_blocks=1 00:08:16.312 00:08:16.312 ' 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.312 --rc genhtml_branch_coverage=1 00:08:16.312 --rc genhtml_function_coverage=1 00:08:16.312 --rc genhtml_legend=1 00:08:16.312 --rc geninfo_all_blocks=1 00:08:16.312 --rc geninfo_unexecuted_blocks=1 00:08:16.312 00:08:16.312 ' 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.312 --rc genhtml_branch_coverage=1 00:08:16.312 --rc genhtml_function_coverage=1 00:08:16.312 --rc genhtml_legend=1 00:08:16.312 --rc geninfo_all_blocks=1 00:08:16.312 --rc geninfo_unexecuted_blocks=1 00:08:16.312 00:08:16.312 ' 00:08:16.312 13:28:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57253 00:08:16.312 13:28:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.312 13:28:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:16.312 13:28:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57253 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@835 -- # '[' -z 57253 ']' 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
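[annotation] The rpc suite that follows talks to spdk_tgt's JSON-RPC server on /var/tmp/spdk.sock through the rpc_cmd wrapper. On the target side, each method name maps to a C handler registered with SPDK_RPC_REGISTER; a minimal sketch of such a handler ("demo_ping" is a made-up method for illustration, not one these tests call):

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

static void
rpc_demo_ping(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
{
        struct spdk_json_write_ctx *w;

        if (params != NULL) {
                spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                                 "demo_ping requires no parameters");
                return;
        }
        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_string(w, "pong");
        spdk_jsonrpc_end_result(request, w);
}
SPDK_RPC_REGISTER("demo_ping", rpc_demo_ping, SPDK_RPC_RUNTIME)

If compiled into the target, 'scripts/rpc.py demo_ping' (or rpc_cmd demo_ping inside this harness) would return "pong".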
00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.312 13:28:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.573 [2024-11-20 13:28:15.819397] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:08:16.574 [2024-11-20 13:28:15.819815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57253 ] 00:08:16.574 [2024-11-20 13:28:15.988199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.834 [2024-11-20 13:28:16.135993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:16.834 [2024-11-20 13:28:16.136311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57253' to capture a snapshot of events at runtime. 00:08:16.834 [2024-11-20 13:28:16.136401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.834 [2024-11-20 13:28:16.136437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.834 [2024-11-20 13:28:16.136459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57253 for offline analysis/debug. 00:08:16.834 [2024-11-20 13:28:16.137488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.775 13:28:16 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.775 13:28:16 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:17.775 13:28:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:17.775 13:28:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:17.775 13:28:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:17.775 13:28:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:17.775 13:28:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.775 13:28:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.775 13:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.775 ************************************ 00:08:17.775 START TEST rpc_integrity 00:08:17.775 ************************************ 00:08:17.775 13:28:16 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:17.775 13:28:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:17.775 13:28:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.775 13:28:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.775 13:28:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.775 13:28:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:17.775 13:28:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:17.775 13:28:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:17.775 13:28:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:17.775 13:28:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.775 13:28:16 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:17.775 { 00:08:17.775 "name": "Malloc0", 00:08:17.775 "aliases": [ 00:08:17.775 "9cb20757-4d25-488d-b536-b457df37cc4a" 00:08:17.775 ], 00:08:17.775 "product_name": "Malloc disk", 00:08:17.775 "block_size": 512, 00:08:17.775 "num_blocks": 16384, 00:08:17.775 "uuid": "9cb20757-4d25-488d-b536-b457df37cc4a", 00:08:17.775 "assigned_rate_limits": { 00:08:17.775 "rw_ios_per_sec": 0, 00:08:17.775 "rw_mbytes_per_sec": 0, 00:08:17.775 "r_mbytes_per_sec": 0, 00:08:17.775 "w_mbytes_per_sec": 0 00:08:17.775 }, 00:08:17.775 "claimed": false, 00:08:17.775 "zoned": false, 00:08:17.775 "supported_io_types": { 00:08:17.775 "read": true, 00:08:17.775 "write": true, 00:08:17.775 "unmap": true, 00:08:17.775 "flush": true, 00:08:17.775 "reset": true, 00:08:17.775 "nvme_admin": false, 00:08:17.775 "nvme_io": false, 00:08:17.775 "nvme_io_md": false, 00:08:17.775 "write_zeroes": true, 00:08:17.775 "zcopy": true, 00:08:17.775 "get_zone_info": false, 00:08:17.775 "zone_management": false, 00:08:17.775 "zone_append": false, 00:08:17.775 "compare": false, 00:08:17.775 "compare_and_write": false, 00:08:17.775 "abort": true, 00:08:17.775 "seek_hole": false, 00:08:17.775 "seek_data": false, 00:08:17.775 "copy": true, 00:08:17.775 "nvme_iov_md": false 00:08:17.775 }, 00:08:17.775 "memory_domains": [ 00:08:17.775 { 00:08:17.775 "dma_device_id": "system", 00:08:17.775 "dma_device_type": 1 00:08:17.775 }, 00:08:17.775 { 00:08:17.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.775 "dma_device_type": 2 00:08:17.775 } 00:08:17.775 ], 00:08:17.775 "driver_specific": {} 00:08:17.775 } 00:08:17.775 ]' 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.775 [2024-11-20 13:28:17.070599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:17.775 [2024-11-20 13:28:17.070696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.775 [2024-11-20 13:28:17.070732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.775 [2024-11-20 13:28:17.070746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.775 [2024-11-20 13:28:17.073667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.775 [2024-11-20 13:28:17.073753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:17.775 Passthru0 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.775 
13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.775 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.775 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:17.775 { 00:08:17.775 "name": "Malloc0", 00:08:17.775 "aliases": [ 00:08:17.775 "9cb20757-4d25-488d-b536-b457df37cc4a" 00:08:17.775 ], 00:08:17.775 "product_name": "Malloc disk", 00:08:17.775 "block_size": 512, 00:08:17.775 "num_blocks": 16384, 00:08:17.775 "uuid": "9cb20757-4d25-488d-b536-b457df37cc4a", 00:08:17.775 "assigned_rate_limits": { 00:08:17.775 "rw_ios_per_sec": 0, 00:08:17.775 "rw_mbytes_per_sec": 0, 00:08:17.775 "r_mbytes_per_sec": 0, 00:08:17.775 "w_mbytes_per_sec": 0 00:08:17.775 }, 00:08:17.775 "claimed": true, 00:08:17.775 "claim_type": "exclusive_write", 00:08:17.775 "zoned": false, 00:08:17.775 "supported_io_types": { 00:08:17.775 "read": true, 00:08:17.775 "write": true, 00:08:17.775 "unmap": true, 00:08:17.775 "flush": true, 00:08:17.775 "reset": true, 00:08:17.775 "nvme_admin": false, 00:08:17.775 "nvme_io": false, 00:08:17.775 "nvme_io_md": false, 00:08:17.775 "write_zeroes": true, 00:08:17.775 "zcopy": true, 00:08:17.775 "get_zone_info": false, 00:08:17.775 "zone_management": false, 00:08:17.775 "zone_append": false, 00:08:17.776 "compare": false, 00:08:17.776 "compare_and_write": false, 00:08:17.776 "abort": true, 00:08:17.776 "seek_hole": false, 00:08:17.776 "seek_data": false, 00:08:17.776 "copy": true, 00:08:17.776 "nvme_iov_md": false 00:08:17.776 }, 00:08:17.776 "memory_domains": [ 00:08:17.776 { 00:08:17.776 "dma_device_id": "system", 00:08:17.776 "dma_device_type": 1 00:08:17.776 }, 00:08:17.776 { 00:08:17.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.776 "dma_device_type": 2 00:08:17.776 } 00:08:17.776 ], 00:08:17.776 "driver_specific": {} 00:08:17.776 }, 00:08:17.776 { 00:08:17.776 "name": "Passthru0", 00:08:17.776 "aliases": [ 00:08:17.776 "f67acfad-e730-552b-961b-0672bd614300" 00:08:17.776 ], 00:08:17.776 "product_name": "passthru", 00:08:17.776 "block_size": 512, 00:08:17.776 "num_blocks": 16384, 00:08:17.776 "uuid": "f67acfad-e730-552b-961b-0672bd614300", 00:08:17.776 "assigned_rate_limits": { 00:08:17.776 "rw_ios_per_sec": 0, 00:08:17.776 "rw_mbytes_per_sec": 0, 00:08:17.776 "r_mbytes_per_sec": 0, 00:08:17.776 "w_mbytes_per_sec": 0 00:08:17.776 }, 00:08:17.776 "claimed": false, 00:08:17.776 "zoned": false, 00:08:17.776 "supported_io_types": { 00:08:17.776 "read": true, 00:08:17.776 "write": true, 00:08:17.776 "unmap": true, 00:08:17.776 "flush": true, 00:08:17.776 "reset": true, 00:08:17.776 "nvme_admin": false, 00:08:17.776 "nvme_io": false, 00:08:17.776 "nvme_io_md": false, 00:08:17.776 "write_zeroes": true, 00:08:17.776 "zcopy": true, 00:08:17.776 "get_zone_info": false, 00:08:17.776 "zone_management": false, 00:08:17.776 "zone_append": false, 00:08:17.776 "compare": false, 00:08:17.776 "compare_and_write": false, 00:08:17.776 "abort": true, 00:08:17.776 "seek_hole": false, 00:08:17.776 "seek_data": false, 00:08:17.776 "copy": true, 00:08:17.776 "nvme_iov_md": false 00:08:17.776 }, 00:08:17.776 "memory_domains": [ 00:08:17.776 { 00:08:17.776 "dma_device_id": "system", 00:08:17.776 "dma_device_type": 1 00:08:17.776 }, 00:08:17.776 { 00:08:17.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.776 "dma_device_type": 2 
00:08:17.776 } 00:08:17.776 ], 00:08:17.776 "driver_specific": { 00:08:17.776 "passthru": { 00:08:17.776 "name": "Passthru0", 00:08:17.776 "base_bdev_name": "Malloc0" 00:08:17.776 } 00:08:17.776 } 00:08:17.776 } 00:08:17.776 ]' 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:17.776 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:17.776 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:18.035 ************************************ 00:08:18.035 END TEST rpc_integrity 00:08:18.035 ************************************ 00:08:18.035 13:28:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:18.035 00:08:18.035 real 0m0.272s 00:08:18.035 user 0m0.138s 00:08:18.035 sys 0m0.037s 00:08:18.035 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.035 13:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.035 13:28:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:18.035 13:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.035 13:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.035 13:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.035 ************************************ 00:08:18.035 START TEST rpc_plugins 00:08:18.035 ************************************ 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:18.035 { 00:08:18.035 "name": "Malloc1", 00:08:18.035 "aliases": 
[ 00:08:18.035 "3bfae747-c3dd-4f1f-8a39-7af8ead705c9" 00:08:18.035 ], 00:08:18.035 "product_name": "Malloc disk", 00:08:18.035 "block_size": 4096, 00:08:18.035 "num_blocks": 256, 00:08:18.035 "uuid": "3bfae747-c3dd-4f1f-8a39-7af8ead705c9", 00:08:18.035 "assigned_rate_limits": { 00:08:18.035 "rw_ios_per_sec": 0, 00:08:18.035 "rw_mbytes_per_sec": 0, 00:08:18.035 "r_mbytes_per_sec": 0, 00:08:18.035 "w_mbytes_per_sec": 0 00:08:18.035 }, 00:08:18.035 "claimed": false, 00:08:18.035 "zoned": false, 00:08:18.035 "supported_io_types": { 00:08:18.035 "read": true, 00:08:18.035 "write": true, 00:08:18.035 "unmap": true, 00:08:18.035 "flush": true, 00:08:18.035 "reset": true, 00:08:18.035 "nvme_admin": false, 00:08:18.035 "nvme_io": false, 00:08:18.035 "nvme_io_md": false, 00:08:18.035 "write_zeroes": true, 00:08:18.035 "zcopy": true, 00:08:18.035 "get_zone_info": false, 00:08:18.035 "zone_management": false, 00:08:18.035 "zone_append": false, 00:08:18.035 "compare": false, 00:08:18.035 "compare_and_write": false, 00:08:18.035 "abort": true, 00:08:18.035 "seek_hole": false, 00:08:18.035 "seek_data": false, 00:08:18.035 "copy": true, 00:08:18.035 "nvme_iov_md": false 00:08:18.035 }, 00:08:18.035 "memory_domains": [ 00:08:18.035 { 00:08:18.035 "dma_device_id": "system", 00:08:18.035 "dma_device_type": 1 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.035 "dma_device_type": 2 00:08:18.035 } 00:08:18.035 ], 00:08:18.035 "driver_specific": {} 00:08:18.035 } 00:08:18.035 ]' 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.035 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.035 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.036 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:18.036 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:18.036 ************************************ 00:08:18.036 END TEST rpc_plugins 00:08:18.036 ************************************ 00:08:18.036 13:28:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:18.036 00:08:18.036 real 0m0.132s 00:08:18.036 user 0m0.068s 00:08:18.036 sys 0m0.021s 00:08:18.036 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.036 13:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.297 13:28:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:18.297 13:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.297 13:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.297 13:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.297 ************************************ 00:08:18.297 START TEST rpc_trace_cmd_test 00:08:18.297 ************************************ 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:18.297 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57253", 00:08:18.297 "tpoint_group_mask": "0x8", 00:08:18.297 "iscsi_conn": { 00:08:18.297 "mask": "0x2", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "scsi": { 00:08:18.297 "mask": "0x4", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "bdev": { 00:08:18.297 "mask": "0x8", 00:08:18.297 "tpoint_mask": "0xffffffffffffffff" 00:08:18.297 }, 00:08:18.297 "nvmf_rdma": { 00:08:18.297 "mask": "0x10", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "nvmf_tcp": { 00:08:18.297 "mask": "0x20", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "ftl": { 00:08:18.297 "mask": "0x40", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "blobfs": { 00:08:18.297 "mask": "0x80", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "dsa": { 00:08:18.297 "mask": "0x200", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "thread": { 00:08:18.297 "mask": "0x400", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "nvme_pcie": { 00:08:18.297 "mask": "0x800", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "iaa": { 00:08:18.297 "mask": "0x1000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "nvme_tcp": { 00:08:18.297 "mask": "0x2000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "bdev_nvme": { 00:08:18.297 "mask": "0x4000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "sock": { 00:08:18.297 "mask": "0x8000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "blob": { 00:08:18.297 "mask": "0x10000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "bdev_raid": { 00:08:18.297 "mask": "0x20000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 }, 00:08:18.297 "scheduler": { 00:08:18.297 "mask": "0x40000", 00:08:18.297 "tpoint_mask": "0x0" 00:08:18.297 } 00:08:18.297 }' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:18.297 ************************************ 00:08:18.297 END TEST rpc_trace_cmd_test 00:08:18.297 ************************************ 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:18.297 00:08:18.297 real 0m0.198s 
00:08:18.297 user 0m0.148s 00:08:18.297 sys 0m0.036s 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.297 13:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.558 13:28:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:18.558 13:28:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:18.558 13:28:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:18.558 13:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.558 13:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.558 13:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.558 ************************************ 00:08:18.558 START TEST rpc_daemon_integrity 00:08:18.558 ************************************ 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.558 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.558 { 00:08:18.558 "name": "Malloc2", 00:08:18.558 "aliases": [ 00:08:18.558 "e6910ca5-7f7a-470a-8f03-ad2dfde02046" 00:08:18.558 ], 00:08:18.558 "product_name": "Malloc disk", 00:08:18.558 "block_size": 512, 00:08:18.558 "num_blocks": 16384, 00:08:18.558 "uuid": "e6910ca5-7f7a-470a-8f03-ad2dfde02046", 00:08:18.558 "assigned_rate_limits": { 00:08:18.558 "rw_ios_per_sec": 0, 00:08:18.558 "rw_mbytes_per_sec": 0, 00:08:18.558 "r_mbytes_per_sec": 0, 00:08:18.558 "w_mbytes_per_sec": 0 00:08:18.558 }, 00:08:18.558 "claimed": false, 00:08:18.558 "zoned": false, 00:08:18.558 "supported_io_types": { 00:08:18.558 "read": true, 00:08:18.558 "write": true, 00:08:18.558 "unmap": true, 00:08:18.558 "flush": true, 00:08:18.558 "reset": true, 00:08:18.558 "nvme_admin": false, 00:08:18.558 "nvme_io": false, 00:08:18.558 "nvme_io_md": false, 00:08:18.558 "write_zeroes": true, 00:08:18.558 "zcopy": true, 00:08:18.558 "get_zone_info": false, 00:08:18.558 "zone_management": false, 00:08:18.558 "zone_append": false, 00:08:18.558 "compare": false, 00:08:18.558 
"compare_and_write": false, 00:08:18.559 "abort": true, 00:08:18.559 "seek_hole": false, 00:08:18.559 "seek_data": false, 00:08:18.559 "copy": true, 00:08:18.559 "nvme_iov_md": false 00:08:18.559 }, 00:08:18.559 "memory_domains": [ 00:08:18.559 { 00:08:18.559 "dma_device_id": "system", 00:08:18.559 "dma_device_type": 1 00:08:18.559 }, 00:08:18.559 { 00:08:18.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.559 "dma_device_type": 2 00:08:18.559 } 00:08:18.559 ], 00:08:18.559 "driver_specific": {} 00:08:18.559 } 00:08:18.559 ]' 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.559 [2024-11-20 13:28:17.864888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:18.559 [2024-11-20 13:28:17.864998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.559 [2024-11-20 13:28:17.865027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:18.559 [2024-11-20 13:28:17.865042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.559 [2024-11-20 13:28:17.867799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.559 [2024-11-20 13:28:17.867862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:18.559 Passthru0 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:18.559 { 00:08:18.559 "name": "Malloc2", 00:08:18.559 "aliases": [ 00:08:18.559 "e6910ca5-7f7a-470a-8f03-ad2dfde02046" 00:08:18.559 ], 00:08:18.559 "product_name": "Malloc disk", 00:08:18.559 "block_size": 512, 00:08:18.559 "num_blocks": 16384, 00:08:18.559 "uuid": "e6910ca5-7f7a-470a-8f03-ad2dfde02046", 00:08:18.559 "assigned_rate_limits": { 00:08:18.559 "rw_ios_per_sec": 0, 00:08:18.559 "rw_mbytes_per_sec": 0, 00:08:18.559 "r_mbytes_per_sec": 0, 00:08:18.559 "w_mbytes_per_sec": 0 00:08:18.559 }, 00:08:18.559 "claimed": true, 00:08:18.559 "claim_type": "exclusive_write", 00:08:18.559 "zoned": false, 00:08:18.559 "supported_io_types": { 00:08:18.559 "read": true, 00:08:18.559 "write": true, 00:08:18.559 "unmap": true, 00:08:18.559 "flush": true, 00:08:18.559 "reset": true, 00:08:18.559 "nvme_admin": false, 00:08:18.559 "nvme_io": false, 00:08:18.559 "nvme_io_md": false, 00:08:18.559 "write_zeroes": true, 00:08:18.559 "zcopy": true, 00:08:18.559 "get_zone_info": false, 00:08:18.559 "zone_management": false, 00:08:18.559 "zone_append": false, 00:08:18.559 "compare": false, 00:08:18.559 "compare_and_write": false, 00:08:18.559 "abort": true, 00:08:18.559 "seek_hole": false, 00:08:18.559 "seek_data": false, 
00:08:18.559 "copy": true, 00:08:18.559 "nvme_iov_md": false 00:08:18.559 }, 00:08:18.559 "memory_domains": [ 00:08:18.559 { 00:08:18.559 "dma_device_id": "system", 00:08:18.559 "dma_device_type": 1 00:08:18.559 }, 00:08:18.559 { 00:08:18.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.559 "dma_device_type": 2 00:08:18.559 } 00:08:18.559 ], 00:08:18.559 "driver_specific": {} 00:08:18.559 }, 00:08:18.559 { 00:08:18.559 "name": "Passthru0", 00:08:18.559 "aliases": [ 00:08:18.559 "a3a95950-616a-58a4-b9da-e379593651c7" 00:08:18.559 ], 00:08:18.559 "product_name": "passthru", 00:08:18.559 "block_size": 512, 00:08:18.559 "num_blocks": 16384, 00:08:18.559 "uuid": "a3a95950-616a-58a4-b9da-e379593651c7", 00:08:18.559 "assigned_rate_limits": { 00:08:18.559 "rw_ios_per_sec": 0, 00:08:18.559 "rw_mbytes_per_sec": 0, 00:08:18.559 "r_mbytes_per_sec": 0, 00:08:18.559 "w_mbytes_per_sec": 0 00:08:18.559 }, 00:08:18.559 "claimed": false, 00:08:18.559 "zoned": false, 00:08:18.559 "supported_io_types": { 00:08:18.559 "read": true, 00:08:18.559 "write": true, 00:08:18.559 "unmap": true, 00:08:18.559 "flush": true, 00:08:18.559 "reset": true, 00:08:18.559 "nvme_admin": false, 00:08:18.559 "nvme_io": false, 00:08:18.559 "nvme_io_md": false, 00:08:18.559 "write_zeroes": true, 00:08:18.559 "zcopy": true, 00:08:18.559 "get_zone_info": false, 00:08:18.559 "zone_management": false, 00:08:18.559 "zone_append": false, 00:08:18.559 "compare": false, 00:08:18.559 "compare_and_write": false, 00:08:18.559 "abort": true, 00:08:18.559 "seek_hole": false, 00:08:18.559 "seek_data": false, 00:08:18.559 "copy": true, 00:08:18.559 "nvme_iov_md": false 00:08:18.559 }, 00:08:18.559 "memory_domains": [ 00:08:18.559 { 00:08:18.559 "dma_device_id": "system", 00:08:18.559 "dma_device_type": 1 00:08:18.559 }, 00:08:18.559 { 00:08:18.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.559 "dma_device_type": 2 00:08:18.559 } 00:08:18.559 ], 00:08:18.559 "driver_specific": { 00:08:18.559 "passthru": { 00:08:18.559 "name": "Passthru0", 00:08:18.559 "base_bdev_name": "Malloc2" 00:08:18.559 } 00:08:18.559 } 00:08:18.559 } 00:08:18.559 ]' 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:08:18.559 13:28:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:18.818 ************************************
00:08:18.818 END TEST rpc_daemon_integrity
00:08:18.818 ************************************
13:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:18.818
00:08:18.818 real 0m0.258s
00:08:18.818 user 0m0.136s
00:08:18.818 sys 0m0.032s
13:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:18.818 13:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:18.818 13:28:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:18.818 13:28:18 rpc -- rpc/rpc.sh@84 -- # killprocess 57253
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 57253 ']'
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@958 -- # kill -0 57253
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@959 -- # uname
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57253
00:08:18.818 killing process with pid 57253
13:28:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57253'
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@973 -- # kill 57253
00:08:18.818 13:28:18 rpc -- common/autotest_common.sh@978 -- # wait 57253
00:08:20.735 ************************************
00:08:20.735 END TEST rpc
00:08:20.735 ************************************
00:08:20.735
00:08:20.735 real 0m4.279s
00:08:20.735 user 0m4.637s
00:08:20.735 sys 0m0.853s
00:08:20.735 13:28:19 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:20.735 13:28:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:20.735 13:28:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:20.735 13:28:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:20.735 13:28:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:20.735 13:28:19 -- common/autotest_common.sh@10 -- # set +x
00:08:20.735 ************************************
00:08:20.735 START TEST skip_rpc
00:08:20.735 ************************************
00:08:20.736 13:28:19 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:20.736 * Looking for test storage...
00:08:20.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:20.736 13:28:19 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:20.736 13:28:19 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:20.736 13:28:19 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@345 -- # : 1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:20.736 13:28:20 skip_rpc -- scripts/common.sh@368 -- # return 0
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.736 --rc genhtml_branch_coverage=1
00:08:20.736 --rc genhtml_function_coverage=1
00:08:20.736 --rc genhtml_legend=1
00:08:20.736 --rc geninfo_all_blocks=1
00:08:20.736 --rc geninfo_unexecuted_blocks=1
00:08:20.736
00:08:20.736 '
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.736 --rc genhtml_branch_coverage=1
00:08:20.736 --rc genhtml_function_coverage=1
00:08:20.736 --rc genhtml_legend=1
00:08:20.736 --rc geninfo_all_blocks=1
00:08:20.736 --rc geninfo_unexecuted_blocks=1
00:08:20.736
00:08:20.736 '
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.736 --rc genhtml_branch_coverage=1
00:08:20.736 --rc genhtml_function_coverage=1
00:08:20.736 --rc genhtml_legend=1
00:08:20.736 --rc geninfo_all_blocks=1
00:08:20.736 --rc geninfo_unexecuted_blocks=1
00:08:20.736
00:08:20.736 '
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:20.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.736 --rc genhtml_branch_coverage=1
00:08:20.736 --rc genhtml_function_coverage=1
00:08:20.736 --rc genhtml_legend=1
00:08:20.736 --rc geninfo_all_blocks=1
00:08:20.736 --rc geninfo_unexecuted_blocks=1
00:08:20.736
00:08:20.736 '
00:08:20.736 13:28:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:20.736 13:28:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:20.736 13:28:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:20.736 13:28:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:20.736 ************************************
00:08:20.736 START TEST skip_rpc
00:08:20.736 ************************************
00:08:20.736 13:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:08:20.736 13:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57471
00:08:20.736 13:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:20.736 13:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:08:20.736 13:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:08:20.997 [2024-11-20 13:28:20.185362] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:08:20.997 [2024-11-20 13:28:20.185507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57471 ]
00:08:20.997 [2024-11-20 13:28:20.351265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.257 [2024-11-20 13:28:20.482159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57471
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57471 ']'
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57471
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57471
00:08:26.546 killing process with pid 57471
13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57471'
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57471
00:08:26.546 13:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57471
00:08:27.931
00:08:27.931 real 0m6.835s
00:08:27.931 user 0m6.270s
00:08:27.931 sys 0m0.417s
00:08:27.931 13:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:27.931 13:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:27.931 ************************************
00:08:27.931 END TEST skip_rpc
00:08:27.932 ************************************
00:08:27.932 13:28:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:08:27.932 13:28:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:27.932 13:28:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:27.932 13:28:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:27.932 ************************************
00:08:27.932 START TEST skip_rpc_with_json
00:08:27.932 ************************************
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57570
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:27.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57570
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57570 ']'
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:27.932 13:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:27.932 [2024-11-20 13:28:27.091617] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:08:27.932 [2024-11-20 13:28:27.091778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57570 ]
00:08:27.932 [2024-11-20 13:28:27.256456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:28.192 [2024-11-20 13:28:27.394547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:28.810 [2024-11-20 13:28:28.137269] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:08:28.810 request:
00:08:28.810 {
00:08:28.810 "trtype": "tcp",
00:08:28.810 "method": "nvmf_get_transports",
00:08:28.810 "req_id": 1
00:08:28.810 }
00:08:28.810 Got JSON-RPC error response
00:08:28.810 response:
00:08:28.810 {
00:08:28.810 "code": -19,
00:08:28.810 "message": "No such device"
00:08:28.810 }
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:28.810 [2024-11-20 13:28:28.149412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:28.810 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:29.071 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.071 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:29.071 {
00:08:29.071 "subsystems": [
00:08:29.071 {
00:08:29.071 "subsystem": "fsdev",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "fsdev_set_opts",
00:08:29.071 "params": {
00:08:29.071 "fsdev_io_pool_size": 65535,
00:08:29.071 "fsdev_io_cache_size": 256
00:08:29.071 }
00:08:29.071 }
00:08:29.071 ]
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "keyring",
00:08:29.071 "config": []
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "iobuf",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "iobuf_set_options",
00:08:29.071 "params": {
00:08:29.071 "small_pool_count": 8192,
00:08:29.071 "large_pool_count": 1024,
00:08:29.071 "small_bufsize": 8192,
00:08:29.071 "large_bufsize": 135168,
00:08:29.071 "enable_numa": false
00:08:29.071 }
00:08:29.071 }
00:08:29.071 ]
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "sock",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "sock_set_default_impl",
00:08:29.071 "params": {
00:08:29.071 "impl_name": "posix"
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "sock_impl_set_options",
00:08:29.071 "params": {
00:08:29.071 "impl_name": "ssl",
00:08:29.071 "recv_buf_size": 4096,
00:08:29.071 "send_buf_size": 4096,
00:08:29.071 "enable_recv_pipe": true,
00:08:29.071 "enable_quickack": false,
00:08:29.071 "enable_placement_id": 0,
00:08:29.071 "enable_zerocopy_send_server": true,
00:08:29.071 "enable_zerocopy_send_client": false,
00:08:29.071 "zerocopy_threshold": 0,
00:08:29.071 "tls_version": 0,
00:08:29.071 "enable_ktls": false
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "sock_impl_set_options",
00:08:29.071 "params": {
00:08:29.071 "impl_name": "posix",
00:08:29.071 "recv_buf_size": 2097152,
00:08:29.071 "send_buf_size": 2097152,
00:08:29.071 "enable_recv_pipe": true,
00:08:29.071 "enable_quickack": false,
00:08:29.071 "enable_placement_id": 0,
00:08:29.071 "enable_zerocopy_send_server": true,
00:08:29.071 "enable_zerocopy_send_client": false,
00:08:29.071 "zerocopy_threshold": 0,
00:08:29.071 "tls_version": 0,
00:08:29.071 "enable_ktls": false
00:08:29.071 }
00:08:29.071 }
00:08:29.071 ]
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "vmd",
00:08:29.071 "config": []
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "accel",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "accel_set_options",
00:08:29.071 "params": {
00:08:29.071 "small_cache_size": 128,
00:08:29.071 "large_cache_size": 16,
00:08:29.071 "task_count": 2048,
00:08:29.071 "sequence_count": 2048,
00:08:29.071 "buf_count": 2048
00:08:29.071 }
00:08:29.071 }
00:08:29.071 ]
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "bdev",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "bdev_set_options",
00:08:29.071 "params": {
00:08:29.071 "bdev_io_pool_size": 65535,
00:08:29.071 "bdev_io_cache_size": 256,
00:08:29.071 "bdev_auto_examine": true,
00:08:29.071 "iobuf_small_cache_size": 128,
00:08:29.071 "iobuf_large_cache_size": 16
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "bdev_raid_set_options",
00:08:29.071 "params": {
00:08:29.071 "process_window_size_kb": 1024,
00:08:29.071 "process_max_bandwidth_mb_sec": 0
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "bdev_iscsi_set_options",
00:08:29.071 "params": {
00:08:29.071 "timeout_sec": 30
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "bdev_nvme_set_options",
00:08:29.071 "params": {
00:08:29.071 "action_on_timeout": "none",
00:08:29.071 "timeout_us": 0,
00:08:29.071 "timeout_admin_us": 0,
00:08:29.071 "keep_alive_timeout_ms": 10000,
00:08:29.071 "arbitration_burst": 0,
00:08:29.071 "low_priority_weight": 0,
00:08:29.071 "medium_priority_weight": 0,
00:08:29.071 "high_priority_weight": 0,
00:08:29.071 "nvme_adminq_poll_period_us": 10000,
00:08:29.071 "nvme_ioq_poll_period_us": 0,
00:08:29.071 "io_queue_requests": 0,
00:08:29.071 "delay_cmd_submit": true,
00:08:29.071 "transport_retry_count": 4,
00:08:29.071 "bdev_retry_count": 3,
00:08:29.071 "transport_ack_timeout": 0,
00:08:29.071 "ctrlr_loss_timeout_sec": 0,
00:08:29.071 "reconnect_delay_sec": 0,
00:08:29.071 "fast_io_fail_timeout_sec": 0,
00:08:29.071 "disable_auto_failback": false,
00:08:29.071 "generate_uuids": false,
00:08:29.071 "transport_tos": 0,
00:08:29.071 "nvme_error_stat": false,
00:08:29.071 "rdma_srq_size": 0,
00:08:29.071 "io_path_stat": false,
00:08:29.071 "allow_accel_sequence": false,
00:08:29.071 "rdma_max_cq_size": 0,
00:08:29.071 "rdma_cm_event_timeout_ms": 0,
00:08:29.071 "dhchap_digests": [
00:08:29.071 "sha256",
00:08:29.071 "sha384",
00:08:29.071 "sha512"
00:08:29.071 ],
00:08:29.071 "dhchap_dhgroups": [
00:08:29.071 "null",
00:08:29.071 "ffdhe2048",
00:08:29.071 "ffdhe3072",
00:08:29.071 "ffdhe4096",
00:08:29.071 "ffdhe6144",
00:08:29.071 "ffdhe8192"
00:08:29.071 ]
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "bdev_nvme_set_hotplug",
00:08:29.071 "params": {
00:08:29.071 "period_us": 100000,
00:08:29.071 "enable": false
00:08:29.071 }
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "method": "bdev_wait_for_examine"
00:08:29.071 }
00:08:29.071 ]
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "scsi",
00:08:29.071 "config": null
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "scheduler",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "framework_set_scheduler",
00:08:29.071 "params": {
00:08:29.071 "name": "static"
00:08:29.071 }
00:08:29.071 }
00:08:29.071 ]
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "vhost_scsi",
00:08:29.071 "config": []
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "vhost_blk",
00:08:29.071 "config": []
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "ublk",
00:08:29.071 "config": []
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "nbd",
00:08:29.071 "config": []
00:08:29.071 },
00:08:29.071 {
00:08:29.071 "subsystem": "nvmf",
00:08:29.071 "config": [
00:08:29.071 {
00:08:29.071 "method": "nvmf_set_config",
00:08:29.071 "params": {
00:08:29.071 "discovery_filter": "match_any",
00:08:29.071 "admin_cmd_passthru": {
00:08:29.071 "identify_ctrlr": false
00:08:29.071 },
00:08:29.071 "dhchap_digests": [
00:08:29.071 "sha256",
00:08:29.071 "sha384",
00:08:29.071 "sha512"
00:08:29.071 ],
00:08:29.072 "dhchap_dhgroups": [
00:08:29.072 "null",
00:08:29.072 "ffdhe2048",
00:08:29.072 "ffdhe3072",
00:08:29.072 "ffdhe4096",
00:08:29.072 "ffdhe6144",
00:08:29.072 "ffdhe8192"
00:08:29.072 ]
00:08:29.072 }
00:08:29.072 },
00:08:29.072 {
00:08:29.072 "method": "nvmf_set_max_subsystems",
00:08:29.072 "params": {
00:08:29.072 "max_subsystems": 1024
00:08:29.072 }
00:08:29.072 },
00:08:29.072 {
00:08:29.072 "method": "nvmf_set_crdt",
00:08:29.072 "params": {
00:08:29.072 "crdt1": 0,
00:08:29.072 "crdt2": 0,
00:08:29.072 "crdt3": 0
00:08:29.072 }
00:08:29.072 },
00:08:29.072 {
00:08:29.072 "method": "nvmf_create_transport",
00:08:29.072 "params": {
00:08:29.072 "trtype": "TCP",
00:08:29.072 "max_queue_depth": 128,
00:08:29.072 "max_io_qpairs_per_ctrlr": 127,
00:08:29.072 "in_capsule_data_size": 4096,
00:08:29.072 "max_io_size": 131072,
00:08:29.072 "io_unit_size": 131072,
00:08:29.072 "max_aq_depth": 128,
00:08:29.072 "num_shared_buffers": 511,
00:08:29.072 "buf_cache_size": 4294967295,
00:08:29.072 "dif_insert_or_strip": false,
00:08:29.072 "zcopy": false,
00:08:29.072 "c2h_success": true,
00:08:29.072 "sock_priority": 0,
00:08:29.072 "abort_timeout_sec": 1,
00:08:29.072 "ack_timeout": 0,
00:08:29.072 "data_wr_pool_size": 0
00:08:29.072 }
00:08:29.072 }
00:08:29.072 ]
00:08:29.072 },
00:08:29.072 {
00:08:29.072 "subsystem": "iscsi",
00:08:29.072 "config": [
00:08:29.072 {
00:08:29.072 "method": "iscsi_set_options",
00:08:29.072 "params": {
00:08:29.072 "node_base": "iqn.2016-06.io.spdk",
00:08:29.072 "max_sessions": 128,
00:08:29.072 "max_connections_per_session": 2,
00:08:29.072 "max_queue_depth": 64,
00:08:29.072 "default_time2wait": 2,
00:08:29.072 "default_time2retain": 20,
00:08:29.072 "first_burst_length": 8192,
00:08:29.072 "immediate_data": true,
00:08:29.072 "allow_duplicated_isid": false,
00:08:29.072 "error_recovery_level": 0,
00:08:29.072 "nop_timeout": 60,
00:08:29.072 "nop_in_interval": 30,
00:08:29.072 "disable_chap": false,
00:08:29.072 "require_chap": false,
00:08:29.072 "mutual_chap": false,
00:08:29.072 "chap_group": 0,
00:08:29.072 "max_large_datain_per_connection": 64,
00:08:29.072 "max_r2t_per_connection": 4,
00:08:29.072 "pdu_pool_size": 36864,
00:08:29.072 "immediate_data_pool_size": 16384,
00:08:29.072 "data_out_pool_size": 2048
00:08:29.072 }
00:08:29.072 }
00:08:29.072 ]
00:08:29.072 }
00:08:29.072 ]
00:08:29.072 }
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57570
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57570 ']'
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57570
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57570
00:08:29.072 killing process with pid 57570
13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57570'
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57570
00:08:29.072 13:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57570
00:08:30.985 13:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57615
00:08:30.985 13:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:08:30.985 13:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57615
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57615 ']'
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57615
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57615
00:08:36.373 killing process with pid 57615
13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57615'
00:08:36.373 13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57615
13:28:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57615
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:37.809
00:08:37.809 real 0m9.863s
00:08:37.809 user 0m9.253s
00:08:37.809 sys 0m0.860s
00:08:37.809 ************************************
00:08:37.809 END TEST skip_rpc_with_json
00:08:37.809 ************************************
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:37.809 13:28:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:08:37.809 13:28:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:37.809 13:28:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:37.809 13:28:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:37.809 ************************************
00:08:37.809 START TEST skip_rpc_with_delay
00:08:37.809 ************************************
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:37.809 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:08:37.810 13:28:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:37.810 [2024-11-20 13:28:37.022139] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:08:37.810 13:28:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:08:37.810 13:28:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:37.810 13:28:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:37.810 13:28:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:37.810
00:08:37.810 real 0m0.146s
00:08:37.810 user 0m0.065s
00:08:37.810 sys 0m0.078s
00:08:37.810 13:28:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:37.810 13:28:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:08:37.810 ************************************
00:08:37.810 END TEST skip_rpc_with_delay
00:08:37.810 ************************************
00:08:37.810 13:28:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:08:37.810 13:28:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:08:37.810 13:28:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:08:37.810 13:28:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:37.810 13:28:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:37.810 13:28:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:37.810 ************************************
00:08:37.810 START TEST exit_on_failed_rpc_init
00:08:37.810 ************************************
00:08:37.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57743
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57743
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57743 ']'
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:37.810 13:28:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:38.071 [2024-11-20 13:28:37.260552] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:08:38.071 [2024-11-20 13:28:37.260703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57743 ]
00:08:38.071 [2024-11-20 13:28:37.425531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.331 [2024-11-20 13:28:37.567644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:08:38.903 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:08:39.165 [2024-11-20 13:28:38.400137] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:08:39.165 [2024-11-20 13:28:38.400328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57761 ]
00:08:39.165 [2024-11-20 13:28:38.560152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.426 [2024-11-20 13:28:38.703158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:39.426 [2024-11-20 13:28:38.703269] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:08:39.426 [2024-11-20 13:28:38.703285] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:08:39.426 [2024-11-20 13:28:38.703301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57743
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57743 ']'
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57743
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57743
00:08:39.685 killing process with pid 57743
13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57743'
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57743
00:08:39.685 13:28:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57743
00:08:41.605 ************************************
00:08:41.605 END TEST exit_on_failed_rpc_init
00:08:41.605 ************************************
00:08:41.605
00:08:41.605 real 0m3.519s
00:08:41.605 user 0m3.771s
00:08:41.605 sys 0m0.589s
00:08:41.605 13:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.605 13:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:41.605 13:28:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:41.605
00:08:41.605 real 0m20.825s
00:08:41.605 user 0m19.506s
00:08:41.605 sys 0m2.149s
00:08:41.605 ************************************
00:08:41.605 END TEST skip_rpc
00:08:41.605 ************************************
00:08:41.605 13:28:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.605 13:28:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:41.605 13:28:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:08:41.605 13:28:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.605 13:28:40 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.605 13:28:40 -- common/autotest_common.sh@10 -- # set +x
00:08:41.605 ************************************
00:08:41.605 START TEST rpc_client
00:08:41.605 ************************************
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:08:41.605 * Looking for test storage...
00:08:41.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@345 -- # : 1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@353 -- # local d=1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@355 -- # echo 1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@353 -- # local d=2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@355 -- # echo 2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:41.605 13:28:40 rpc_client -- scripts/common.sh@368 -- # return 0
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:41.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.605 --rc genhtml_branch_coverage=1
00:08:41.605 --rc genhtml_function_coverage=1
00:08:41.605 --rc genhtml_legend=1
00:08:41.605 --rc geninfo_all_blocks=1
00:08:41.605 --rc geninfo_unexecuted_blocks=1
00:08:41.605
00:08:41.605 '
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:41.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.605 --rc genhtml_branch_coverage=1
00:08:41.605 --rc genhtml_function_coverage=1
00:08:41.605 --rc genhtml_legend=1
00:08:41.605 --rc geninfo_all_blocks=1
00:08:41.605 --rc geninfo_unexecuted_blocks=1
00:08:41.605
00:08:41.605 '
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:41.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.605 --rc genhtml_branch_coverage=1
00:08:41.605 --rc genhtml_function_coverage=1
00:08:41.605 --rc genhtml_legend=1
00:08:41.605 --rc geninfo_all_blocks=1
00:08:41.605 --rc geninfo_unexecuted_blocks=1
00:08:41.605
00:08:41.605 '
00:08:41.605 13:28:40 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:41.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.605 --rc genhtml_branch_coverage=1
00:08:41.605 --rc genhtml_function_coverage=1
00:08:41.605 --rc genhtml_legend=1
00:08:41.605 --rc geninfo_all_blocks=1
00:08:41.605 --rc geninfo_unexecuted_blocks=1
00:08:41.605
00:08:41.605 '
00:08:41.605 13:28:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:08:41.605 OK
00:08:41.605 13:28:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:08:41.605
00:08:41.605 real 0m0.211s
00:08:41.605 user 0m0.114s
00:08:41.605 sys 0m0.100s
00:08:41.605 13:28:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.605 13:28:41 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:08:41.605 ************************************
00:08:41.605 END TEST rpc_client
00:08:41.605 ************************************
00:08:41.868 13:28:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:08:41.868 13:28:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.868 13:28:41 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.868 13:28:41 -- common/autotest_common.sh@10 -- # set +x
00:08:41.868 ************************************
00:08:41.868 START TEST json_config
00:08:41.868 ************************************
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:41.868 13:28:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:41.868 13:28:41 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:08:41.868 13:28:41 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:08:41.868 13:28:41 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:08:41.868 13:28:41 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:41.868 13:28:41 json_config -- scripts/common.sh@344 -- # case "$op" in
00:08:41.868 13:28:41 json_config -- scripts/common.sh@345 -- # : 1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:41.868 13:28:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:41.868 13:28:41 json_config -- scripts/common.sh@365 -- # decimal 1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@353 -- # local d=1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:41.868 13:28:41 json_config -- scripts/common.sh@355 -- # echo 1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:08:41.868 13:28:41 json_config -- scripts/common.sh@366 -- # decimal 2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@353 -- # local d=2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:41.868 13:28:41 json_config -- scripts/common.sh@355 -- # echo 2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:08:41.868 13:28:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:41.868 13:28:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:41.868 13:28:41 json_config -- scripts/common.sh@368 -- # return 0
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.868 --rc genhtml_branch_coverage=1
00:08:41.868 --rc genhtml_function_coverage=1
00:08:41.868 --rc genhtml_legend=1
00:08:41.868 --rc geninfo_all_blocks=1
00:08:41.868 --rc geninfo_unexecuted_blocks=1
00:08:41.868
00:08:41.868 '
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.868 --rc genhtml_branch_coverage=1
00:08:41.868 --rc genhtml_function_coverage=1
00:08:41.868 --rc genhtml_legend=1
00:08:41.868 --rc geninfo_all_blocks=1
00:08:41.868 --rc geninfo_unexecuted_blocks=1
00:08:41.868
00:08:41.868 '
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.868 --rc genhtml_branch_coverage=1
00:08:41.868 --rc genhtml_function_coverage=1
00:08:41.868 --rc genhtml_legend=1
00:08:41.868 --rc geninfo_all_blocks=1
00:08:41.868 --rc geninfo_unexecuted_blocks=1
00:08:41.868
00:08:41.868 '
00:08:41.868 13:28:41 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.868 --rc genhtml_branch_coverage=1
00:08:41.868 --rc genhtml_function_coverage=1
00:08:41.868 --rc genhtml_legend=1
00:08:41.868 --rc geninfo_all_blocks=1
00:08:41.868 --rc geninfo_unexecuted_blocks=1
00:08:41.868
00:08:41.868 '
00:08:41.868 13:28:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@7 -- # uname -s
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:304320ee-eac1-42a3-9c03-847f5b09ca5b
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=304320ee-eac1-42a3-9c03-847f5b09ca5b
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:41.868 13:28:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:08:41.868 13:28:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:41.868 13:28:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:41.868 13:28:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:41.868 13:28:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:41.868 13:28:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:41.868 13:28:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:41.868 13:28:41 json_config -- paths/export.sh@5 -- # export PATH
00:08:41.868 13:28:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@51 -- # : 0
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:41.868 13:28:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:41.869 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:41.869 13:28:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:41.869 13:28:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:41.869 13:28:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:41.869 13:28:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:08:41.869 13:28:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:08:41.869 13:28:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:08:41.869 13:28:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:08:41.869 13:28:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:08:41.869 13:28:41 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:08:41.869 WARNING: No tests are enabled so not running JSON configuration tests
13:28:41 json_config -- json_config/json_config.sh@28 -- # exit 0
00:08:41.869
00:08:41.869 real 0m0.157s
00:08:41.869 user 0m0.090s
00:08:41.869 sys 0m0.063s
00:08:41.869 13:28:41 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.869 13:28:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:41.869 ************************************
00:08:41.869 END TEST json_config
00:08:41.869 ************************************
00:08:42.130 13:28:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:08:42.130 13:28:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:42.130 13:28:41 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.130 13:28:41 -- common/autotest_common.sh@10 -- # set +x
00:08:42.130 ************************************
00:08:42.130 START TEST json_config_extra_key
00:08:42.130 ************************************
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:42.130 --rc genhtml_branch_coverage=1
00:08:42.130 --rc genhtml_function_coverage=1
00:08:42.130 --rc genhtml_legend=1
00:08:42.130 --rc geninfo_all_blocks=1
00:08:42.130 --rc geninfo_unexecuted_blocks=1
00:08:42.130
00:08:42.130 '
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:42.130 --rc genhtml_branch_coverage=1
00:08:42.130 --rc genhtml_function_coverage=1
00:08:42.130 --rc genhtml_legend=1
00:08:42.130 --rc geninfo_all_blocks=1
00:08:42.130 --rc geninfo_unexecuted_blocks=1
00:08:42.130
00:08:42.130 '
00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:42.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:42.130 --rc genhtml_branch_coverage=1
00:08:42.130 --rc genhtml_function_coverage=1
00:08:42.130 --rc genhtml_legend=1
00:08:42.130 --rc geninfo_all_blocks=1
00:08:42.130 --rc geninfo_unexecuted_blocks=1
00:08:42.130
00:08:42.130 '
00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:304320ee-eac1-42a3-9c03-847f5b09ca5b
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=304320ee-eac1-42a3-9c03-847f5b09ca5b
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:42.130 13:28:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:42.130 13:28:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:42.130 13:28:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:42.130 13:28:41 json_config_extra_key -- paths/export.sh@4
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.130 13:28:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:42.130 13:28:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.130 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.130 13:28:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:42.130 INFO: launching applications... 00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
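Note on the recurring diagnostic "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" (it also appeared during the json_config run above): bash's test builtin requires both operands of -eq to be integers, and the traced command '[' '' -eq 1 ']' shows the tested variable expanding to an empty string. The test simply returns non-zero, sourcing continues, and the message is harmless here. A minimal sketch of a guarded form; SOME_FLAG is a placeholder, not necessarily the variable tested at line 33:

    # Defaulting the variable preserves the branch taken while avoiding
    # the "[: : integer expression expected" diagnostic.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :   # flag-specific setup would run here
    fi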
00:08:42.130 13:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57960 00:08:42.130 Waiting for target to run... 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57960 /var/tmp/spdk_tgt.sock 00:08:42.130 13:28:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57960 ']' 00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:42.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.130 13:28:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:42.391 [2024-11-20 13:28:41.572095] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:08:42.391 [2024-11-20 13:28:41.572499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57960 ] 00:08:42.652 [2024-11-20 13:28:42.008540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.912 [2024-11-20 13:28:42.135394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.484 00:08:43.484 INFO: shutting down applications... 00:08:43.484 13:28:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.484 13:28:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:43.484 13:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
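For reference, the launch sequence traced above (json_config/common.sh) reduces to the pattern below. The spdk_tgt command line is taken verbatim from the trace; waitforlisten is invoked with the same arguments the trace shows, but its polling internals are not part of this excerpt:

    # Start the target capped at 1024 MB on core 0, preloading the
    # extra_key.json configuration, then block until the RPC socket
    # at /var/tmp/spdk_tgt.sock is accepting connections.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid["$app"]=$!
    waitforlisten "${app_pid[$app]}" /var/tmp/spdk_tgt.sock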
00:08:43.484 13:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57960 ]] 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57960 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57960 00:08:43.484 13:28:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:44.057 13:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:44.057 13:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:44.057 13:28:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57960 00:08:44.057 13:28:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:44.317 13:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:44.317 13:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:44.317 13:28:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57960 00:08:44.317 13:28:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:44.889 13:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:44.889 13:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:44.889 13:28:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57960 00:08:44.889 13:28:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57960 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:45.461 13:28:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:45.461 SPDK target shutdown done 00:08:45.461 Success 00:08:45.461 13:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:45.461 ************************************ 00:08:45.461 END TEST json_config_extra_key 00:08:45.461 ************************************ 00:08:45.461 00:08:45.461 real 0m3.386s 00:08:45.461 user 0m3.047s 00:08:45.461 sys 0m0.596s 00:08:45.461 13:28:44 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.461 13:28:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:45.461 13:28:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:45.461 13:28:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.461 13:28:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.461 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.461 
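The shutdown just traced is a plain liveness poll; the sketch below is reconstructed from the commands above, with the 30-iteration limit and 0.5 s interval exactly as the trace shows:

    # SIGINT asks spdk_tgt to exit cleanly; kill -0 sends no signal and
    # only tests that the PID still exists, so the loop ends as soon as
    # the target is gone (or after at most 15 s).
    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[$app]}" 2> /dev/null || break
        sleep 0.5
    done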
************************************ 00:08:45.461 START TEST alias_rpc 00:08:45.461 ************************************ 00:08:45.461 13:28:44 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:45.461 * Looking for test storage... 00:08:45.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:45.461 13:28:44 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.461 13:28:44 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.461 13:28:44 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.721 13:28:44 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.721 13:28:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:45.721 13:28:44 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.721 13:28:44 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.721 --rc genhtml_branch_coverage=1 00:08:45.721 --rc genhtml_function_coverage=1 00:08:45.721 --rc genhtml_legend=1 00:08:45.721 --rc geninfo_all_blocks=1 00:08:45.721 --rc geninfo_unexecuted_blocks=1 00:08:45.721 00:08:45.721 ' 00:08:45.721 13:28:44 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.721 --rc genhtml_branch_coverage=1 00:08:45.721 --rc genhtml_function_coverage=1 00:08:45.721 --rc genhtml_legend=1 00:08:45.721 --rc geninfo_all_blocks=1 00:08:45.721 --rc geninfo_unexecuted_blocks=1 00:08:45.721 00:08:45.721 ' 00:08:45.721 13:28:44 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.721 --rc genhtml_branch_coverage=1 00:08:45.721 --rc genhtml_function_coverage=1 00:08:45.721 --rc genhtml_legend=1 00:08:45.721 --rc geninfo_all_blocks=1 00:08:45.721 --rc geninfo_unexecuted_blocks=1 00:08:45.721 00:08:45.721 ' 00:08:45.721 13:28:44 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.721 --rc genhtml_branch_coverage=1 00:08:45.721 --rc genhtml_function_coverage=1 00:08:45.721 --rc genhtml_legend=1 00:08:45.721 --rc geninfo_all_blocks=1 00:08:45.721 --rc geninfo_unexecuted_blocks=1 00:08:45.721 00:08:45.721 ' 00:08:45.721 13:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:45.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.721 13:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58064 00:08:45.722 13:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58064 00:08:45.722 13:28:44 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58064 ']' 00:08:45.722 13:28:44 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.722 13:28:44 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.722 13:28:44 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:45.722 13:28:44 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.722 13:28:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.722 13:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:45.722 [2024-11-20 13:28:45.032486] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:08:45.722 [2024-11-20 13:28:45.032640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ] 00:08:45.983 [2024-11-20 13:28:45.198956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.983 [2024-11-20 13:28:45.366701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.988 13:28:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.988 13:28:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:46.988 13:28:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:46.988 13:28:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58064 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58064 ']' 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58064 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58064 00:08:46.989 killing process with pid 58064 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58064' 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 58064 00:08:46.989 13:28:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 58064 00:08:48.902 ************************************ 00:08:48.902 END TEST alias_rpc 00:08:48.902 ************************************ 00:08:48.902 00:08:48.902 real 0m3.279s 00:08:48.902 user 0m3.246s 00:08:48.902 sys 0m0.573s 00:08:48.902 13:28:48 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.902 13:28:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.902 13:28:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:48.902 13:28:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:48.902 13:28:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.902 13:28:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.902 13:28:48 -- common/autotest_common.sh@10 -- # set +x 00:08:48.902 ************************************ 00:08:48.902 START TEST spdkcli_tcp 00:08:48.902 ************************************ 00:08:48.902 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:48.902 * Looking for test storage... 
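The core of the alias_rpc test above is a single RPC round trip: spdk_tgt is started bare, a JSON configuration is replayed through rpc.py over the default /var/tmp/spdk.sock, and the target is killed. A minimal reconstruction; the trace does not show what load_config read on stdin, so conf.json is a hypothetical file name, and the -i flag is copied from the trace as-is:

    # Replay a saved configuration into the running target; load_config
    # takes the JSON from stdin.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < conf.json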
00:08:48.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:48.902 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.902 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.902 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.902 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.902 13:28:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.903 13:28:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.903 --rc genhtml_branch_coverage=1 00:08:48.903 --rc genhtml_function_coverage=1 00:08:48.903 --rc genhtml_legend=1 00:08:48.903 --rc geninfo_all_blocks=1 00:08:48.903 --rc geninfo_unexecuted_blocks=1 00:08:48.903 00:08:48.903 ' 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.903 --rc genhtml_branch_coverage=1 00:08:48.903 --rc genhtml_function_coverage=1 00:08:48.903 --rc genhtml_legend=1 00:08:48.903 --rc geninfo_all_blocks=1 00:08:48.903 --rc geninfo_unexecuted_blocks=1 00:08:48.903 
00:08:48.903 ' 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.903 --rc genhtml_branch_coverage=1 00:08:48.903 --rc genhtml_function_coverage=1 00:08:48.903 --rc genhtml_legend=1 00:08:48.903 --rc geninfo_all_blocks=1 00:08:48.903 --rc geninfo_unexecuted_blocks=1 00:08:48.903 00:08:48.903 ' 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.903 --rc genhtml_branch_coverage=1 00:08:48.903 --rc genhtml_function_coverage=1 00:08:48.903 --rc genhtml_legend=1 00:08:48.903 --rc geninfo_all_blocks=1 00:08:48.903 --rc geninfo_unexecuted_blocks=1 00:08:48.903 00:08:48.903 ' 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58160 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58160 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58160 ']' 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.903 13:28:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.903 13:28:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:49.164 [2024-11-20 13:28:48.400860] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:08:49.164 [2024-11-20 13:28:48.401040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58160 ] 00:08:49.164 [2024-11-20 13:28:48.568044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.423 [2024-11-20 13:28:48.705778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.423 [2024-11-20 13:28:48.705902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.362 13:28:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.362 13:28:49 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:50.362 13:28:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58177 00:08:50.362 13:28:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:50.362 13:28:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:50.362 [ 00:08:50.362 "bdev_malloc_delete", 00:08:50.362 "bdev_malloc_create", 00:08:50.362 "bdev_null_resize", 00:08:50.362 "bdev_null_delete", 00:08:50.362 "bdev_null_create", 00:08:50.362 "bdev_nvme_cuse_unregister", 00:08:50.362 "bdev_nvme_cuse_register", 00:08:50.362 "bdev_opal_new_user", 00:08:50.362 "bdev_opal_set_lock_state", 00:08:50.362 "bdev_opal_delete", 00:08:50.362 "bdev_opal_get_info", 00:08:50.362 "bdev_opal_create", 00:08:50.362 "bdev_nvme_opal_revert", 00:08:50.362 "bdev_nvme_opal_init", 00:08:50.362 "bdev_nvme_send_cmd", 00:08:50.362 "bdev_nvme_set_keys", 00:08:50.362 "bdev_nvme_get_path_iostat", 00:08:50.362 "bdev_nvme_get_mdns_discovery_info", 00:08:50.362 "bdev_nvme_stop_mdns_discovery", 00:08:50.362 "bdev_nvme_start_mdns_discovery", 00:08:50.362 "bdev_nvme_set_multipath_policy", 00:08:50.362 "bdev_nvme_set_preferred_path", 00:08:50.362 "bdev_nvme_get_io_paths", 00:08:50.362 "bdev_nvme_remove_error_injection", 00:08:50.362 "bdev_nvme_add_error_injection", 00:08:50.362 "bdev_nvme_get_discovery_info", 00:08:50.362 "bdev_nvme_stop_discovery", 00:08:50.362 "bdev_nvme_start_discovery", 00:08:50.362 "bdev_nvme_get_controller_health_info", 00:08:50.362 "bdev_nvme_disable_controller", 00:08:50.362 "bdev_nvme_enable_controller", 00:08:50.362 "bdev_nvme_reset_controller", 00:08:50.362 "bdev_nvme_get_transport_statistics", 00:08:50.362 "bdev_nvme_apply_firmware", 00:08:50.362 "bdev_nvme_detach_controller", 00:08:50.362 "bdev_nvme_get_controllers", 00:08:50.362 "bdev_nvme_attach_controller", 00:08:50.362 "bdev_nvme_set_hotplug", 00:08:50.362 "bdev_nvme_set_options", 00:08:50.362 "bdev_passthru_delete", 00:08:50.362 "bdev_passthru_create", 00:08:50.362 "bdev_lvol_set_parent_bdev", 00:08:50.362 "bdev_lvol_set_parent", 00:08:50.362 "bdev_lvol_check_shallow_copy", 00:08:50.362 "bdev_lvol_start_shallow_copy", 00:08:50.362 "bdev_lvol_grow_lvstore", 00:08:50.362 "bdev_lvol_get_lvols", 00:08:50.362 "bdev_lvol_get_lvstores", 00:08:50.362 "bdev_lvol_delete", 00:08:50.363 "bdev_lvol_set_read_only", 00:08:50.363 "bdev_lvol_resize", 00:08:50.363 "bdev_lvol_decouple_parent", 00:08:50.363 "bdev_lvol_inflate", 00:08:50.363 "bdev_lvol_rename", 00:08:50.363 "bdev_lvol_clone_bdev", 00:08:50.363 "bdev_lvol_clone", 00:08:50.363 "bdev_lvol_snapshot", 00:08:50.363 "bdev_lvol_create", 00:08:50.363 "bdev_lvol_delete_lvstore", 00:08:50.363 "bdev_lvol_rename_lvstore", 00:08:50.363 
"bdev_lvol_create_lvstore", 00:08:50.363 "bdev_raid_set_options", 00:08:50.363 "bdev_raid_remove_base_bdev", 00:08:50.363 "bdev_raid_add_base_bdev", 00:08:50.363 "bdev_raid_delete", 00:08:50.363 "bdev_raid_create", 00:08:50.363 "bdev_raid_get_bdevs", 00:08:50.363 "bdev_error_inject_error", 00:08:50.363 "bdev_error_delete", 00:08:50.363 "bdev_error_create", 00:08:50.363 "bdev_split_delete", 00:08:50.363 "bdev_split_create", 00:08:50.363 "bdev_delay_delete", 00:08:50.363 "bdev_delay_create", 00:08:50.363 "bdev_delay_update_latency", 00:08:50.363 "bdev_zone_block_delete", 00:08:50.363 "bdev_zone_block_create", 00:08:50.363 "blobfs_create", 00:08:50.363 "blobfs_detect", 00:08:50.363 "blobfs_set_cache_size", 00:08:50.363 "bdev_xnvme_delete", 00:08:50.363 "bdev_xnvme_create", 00:08:50.363 "bdev_aio_delete", 00:08:50.363 "bdev_aio_rescan", 00:08:50.363 "bdev_aio_create", 00:08:50.363 "bdev_ftl_set_property", 00:08:50.363 "bdev_ftl_get_properties", 00:08:50.363 "bdev_ftl_get_stats", 00:08:50.363 "bdev_ftl_unmap", 00:08:50.363 "bdev_ftl_unload", 00:08:50.363 "bdev_ftl_delete", 00:08:50.363 "bdev_ftl_load", 00:08:50.363 "bdev_ftl_create", 00:08:50.363 "bdev_virtio_attach_controller", 00:08:50.363 "bdev_virtio_scsi_get_devices", 00:08:50.363 "bdev_virtio_detach_controller", 00:08:50.363 "bdev_virtio_blk_set_hotplug", 00:08:50.363 "bdev_iscsi_delete", 00:08:50.363 "bdev_iscsi_create", 00:08:50.363 "bdev_iscsi_set_options", 00:08:50.363 "accel_error_inject_error", 00:08:50.363 "ioat_scan_accel_module", 00:08:50.363 "dsa_scan_accel_module", 00:08:50.363 "iaa_scan_accel_module", 00:08:50.363 "keyring_file_remove_key", 00:08:50.363 "keyring_file_add_key", 00:08:50.363 "keyring_linux_set_options", 00:08:50.363 "fsdev_aio_delete", 00:08:50.363 "fsdev_aio_create", 00:08:50.363 "iscsi_get_histogram", 00:08:50.363 "iscsi_enable_histogram", 00:08:50.363 "iscsi_set_options", 00:08:50.363 "iscsi_get_auth_groups", 00:08:50.363 "iscsi_auth_group_remove_secret", 00:08:50.363 "iscsi_auth_group_add_secret", 00:08:50.363 "iscsi_delete_auth_group", 00:08:50.363 "iscsi_create_auth_group", 00:08:50.363 "iscsi_set_discovery_auth", 00:08:50.363 "iscsi_get_options", 00:08:50.363 "iscsi_target_node_request_logout", 00:08:50.363 "iscsi_target_node_set_redirect", 00:08:50.363 "iscsi_target_node_set_auth", 00:08:50.363 "iscsi_target_node_add_lun", 00:08:50.363 "iscsi_get_stats", 00:08:50.363 "iscsi_get_connections", 00:08:50.363 "iscsi_portal_group_set_auth", 00:08:50.363 "iscsi_start_portal_group", 00:08:50.363 "iscsi_delete_portal_group", 00:08:50.363 "iscsi_create_portal_group", 00:08:50.363 "iscsi_get_portal_groups", 00:08:50.363 "iscsi_delete_target_node", 00:08:50.363 "iscsi_target_node_remove_pg_ig_maps", 00:08:50.363 "iscsi_target_node_add_pg_ig_maps", 00:08:50.363 "iscsi_create_target_node", 00:08:50.363 "iscsi_get_target_nodes", 00:08:50.363 "iscsi_delete_initiator_group", 00:08:50.363 "iscsi_initiator_group_remove_initiators", 00:08:50.363 "iscsi_initiator_group_add_initiators", 00:08:50.363 "iscsi_create_initiator_group", 00:08:50.363 "iscsi_get_initiator_groups", 00:08:50.363 "nvmf_set_crdt", 00:08:50.363 "nvmf_set_config", 00:08:50.363 "nvmf_set_max_subsystems", 00:08:50.363 "nvmf_stop_mdns_prr", 00:08:50.363 "nvmf_publish_mdns_prr", 00:08:50.363 "nvmf_subsystem_get_listeners", 00:08:50.363 "nvmf_subsystem_get_qpairs", 00:08:50.363 "nvmf_subsystem_get_controllers", 00:08:50.363 "nvmf_get_stats", 00:08:50.363 "nvmf_get_transports", 00:08:50.363 "nvmf_create_transport", 00:08:50.363 "nvmf_get_targets", 00:08:50.363 
"nvmf_delete_target", 00:08:50.363 "nvmf_create_target", 00:08:50.363 "nvmf_subsystem_allow_any_host", 00:08:50.363 "nvmf_subsystem_set_keys", 00:08:50.363 "nvmf_subsystem_remove_host", 00:08:50.363 "nvmf_subsystem_add_host", 00:08:50.363 "nvmf_ns_remove_host", 00:08:50.363 "nvmf_ns_add_host", 00:08:50.363 "nvmf_subsystem_remove_ns", 00:08:50.363 "nvmf_subsystem_set_ns_ana_group", 00:08:50.363 "nvmf_subsystem_add_ns", 00:08:50.363 "nvmf_subsystem_listener_set_ana_state", 00:08:50.363 "nvmf_discovery_get_referrals", 00:08:50.363 "nvmf_discovery_remove_referral", 00:08:50.363 "nvmf_discovery_add_referral", 00:08:50.363 "nvmf_subsystem_remove_listener", 00:08:50.363 "nvmf_subsystem_add_listener", 00:08:50.363 "nvmf_delete_subsystem", 00:08:50.363 "nvmf_create_subsystem", 00:08:50.363 "nvmf_get_subsystems", 00:08:50.363 "env_dpdk_get_mem_stats", 00:08:50.363 "nbd_get_disks", 00:08:50.363 "nbd_stop_disk", 00:08:50.363 "nbd_start_disk", 00:08:50.363 "ublk_recover_disk", 00:08:50.363 "ublk_get_disks", 00:08:50.363 "ublk_stop_disk", 00:08:50.363 "ublk_start_disk", 00:08:50.363 "ublk_destroy_target", 00:08:50.363 "ublk_create_target", 00:08:50.363 "virtio_blk_create_transport", 00:08:50.363 "virtio_blk_get_transports", 00:08:50.363 "vhost_controller_set_coalescing", 00:08:50.363 "vhost_get_controllers", 00:08:50.363 "vhost_delete_controller", 00:08:50.363 "vhost_create_blk_controller", 00:08:50.363 "vhost_scsi_controller_remove_target", 00:08:50.363 "vhost_scsi_controller_add_target", 00:08:50.363 "vhost_start_scsi_controller", 00:08:50.363 "vhost_create_scsi_controller", 00:08:50.363 "thread_set_cpumask", 00:08:50.363 "scheduler_set_options", 00:08:50.363 "framework_get_governor", 00:08:50.363 "framework_get_scheduler", 00:08:50.363 "framework_set_scheduler", 00:08:50.363 "framework_get_reactors", 00:08:50.363 "thread_get_io_channels", 00:08:50.363 "thread_get_pollers", 00:08:50.363 "thread_get_stats", 00:08:50.363 "framework_monitor_context_switch", 00:08:50.363 "spdk_kill_instance", 00:08:50.363 "log_enable_timestamps", 00:08:50.363 "log_get_flags", 00:08:50.363 "log_clear_flag", 00:08:50.363 "log_set_flag", 00:08:50.363 "log_get_level", 00:08:50.363 "log_set_level", 00:08:50.363 "log_get_print_level", 00:08:50.363 "log_set_print_level", 00:08:50.363 "framework_enable_cpumask_locks", 00:08:50.363 "framework_disable_cpumask_locks", 00:08:50.363 "framework_wait_init", 00:08:50.363 "framework_start_init", 00:08:50.363 "scsi_get_devices", 00:08:50.363 "bdev_get_histogram", 00:08:50.363 "bdev_enable_histogram", 00:08:50.363 "bdev_set_qos_limit", 00:08:50.363 "bdev_set_qd_sampling_period", 00:08:50.363 "bdev_get_bdevs", 00:08:50.363 "bdev_reset_iostat", 00:08:50.363 "bdev_get_iostat", 00:08:50.363 "bdev_examine", 00:08:50.363 "bdev_wait_for_examine", 00:08:50.363 "bdev_set_options", 00:08:50.363 "accel_get_stats", 00:08:50.363 "accel_set_options", 00:08:50.363 "accel_set_driver", 00:08:50.363 "accel_crypto_key_destroy", 00:08:50.363 "accel_crypto_keys_get", 00:08:50.363 "accel_crypto_key_create", 00:08:50.363 "accel_assign_opc", 00:08:50.363 "accel_get_module_info", 00:08:50.363 "accel_get_opc_assignments", 00:08:50.363 "vmd_rescan", 00:08:50.363 "vmd_remove_device", 00:08:50.363 "vmd_enable", 00:08:50.363 "sock_get_default_impl", 00:08:50.363 "sock_set_default_impl", 00:08:50.363 "sock_impl_set_options", 00:08:50.363 "sock_impl_get_options", 00:08:50.363 "iobuf_get_stats", 00:08:50.363 "iobuf_set_options", 00:08:50.363 "keyring_get_keys", 00:08:50.363 "framework_get_pci_devices", 00:08:50.363 
"framework_get_config", 00:08:50.363 "framework_get_subsystems", 00:08:50.363 "fsdev_set_opts", 00:08:50.363 "fsdev_get_opts", 00:08:50.363 "trace_get_info", 00:08:50.363 "trace_get_tpoint_group_mask", 00:08:50.363 "trace_disable_tpoint_group", 00:08:50.363 "trace_enable_tpoint_group", 00:08:50.363 "trace_clear_tpoint_mask", 00:08:50.363 "trace_set_tpoint_mask", 00:08:50.363 "notify_get_notifications", 00:08:50.363 "notify_get_types", 00:08:50.363 "spdk_get_version", 00:08:50.363 "rpc_get_methods" 00:08:50.363 ] 00:08:50.363 13:28:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.363 13:28:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:50.363 13:28:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58160 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58160 ']' 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58160 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58160 00:08:50.363 killing process with pid 58160 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58160' 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58160 00:08:50.363 13:28:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58160 00:08:52.276 ************************************ 00:08:52.276 END TEST spdkcli_tcp 00:08:52.276 ************************************ 00:08:52.276 00:08:52.276 real 0m3.370s 00:08:52.276 user 0m5.906s 00:08:52.276 sys 0m0.597s 00:08:52.276 13:28:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.276 13:28:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.276 13:28:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:52.276 13:28:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.276 13:28:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.276 13:28:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.276 ************************************ 00:08:52.276 START TEST dpdk_mem_utility 00:08:52.276 ************************************ 00:08:52.276 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:52.276 * Looking for test storage... 
00:08:52.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:52.276 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.276 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.276 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:52.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.536 13:28:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.536 --rc genhtml_branch_coverage=1 00:08:52.536 --rc genhtml_function_coverage=1 00:08:52.536 --rc genhtml_legend=1 00:08:52.536 --rc geninfo_all_blocks=1 00:08:52.536 --rc geninfo_unexecuted_blocks=1 00:08:52.536 00:08:52.536 ' 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.536 --rc genhtml_branch_coverage=1 00:08:52.536 --rc genhtml_function_coverage=1 00:08:52.536 --rc genhtml_legend=1 00:08:52.536 --rc geninfo_all_blocks=1 00:08:52.536 --rc geninfo_unexecuted_blocks=1 00:08:52.536 00:08:52.536 ' 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.536 --rc genhtml_branch_coverage=1 00:08:52.536 --rc genhtml_function_coverage=1 00:08:52.536 --rc genhtml_legend=1 00:08:52.536 --rc geninfo_all_blocks=1 00:08:52.536 --rc geninfo_unexecuted_blocks=1 00:08:52.536 00:08:52.536 ' 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.536 --rc genhtml_branch_coverage=1 00:08:52.536 --rc genhtml_function_coverage=1 00:08:52.536 --rc genhtml_legend=1 00:08:52.536 --rc geninfo_all_blocks=1 00:08:52.536 --rc geninfo_unexecuted_blocks=1 00:08:52.536 00:08:52.536 ' 00:08:52.536 13:28:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:52.536 13:28:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58271 00:08:52.536 13:28:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58271 00:08:52.536 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58271 ']' 00:08:52.537 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.537 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.537 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.537 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.537 13:28:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:52.537 13:28:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:52.537 [2024-11-20 13:28:51.824806] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:08:52.537 [2024-11-20 13:28:51.825315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58271 ] 00:08:52.796 [2024-11-20 13:28:51.995371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.796 [2024-11-20 13:28:52.140435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.739 13:28:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.739 13:28:52 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:53.739 13:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:53.739 13:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:53.739 13:28:52 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.739 13:28:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:53.739 { 00:08:53.739 "filename": "/tmp/spdk_mem_dump.txt" 00:08:53.739 } 00:08:53.739 13:28:52 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.739 13:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:53.739 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:53.739 1 heaps totaling size 824.000000 MiB 00:08:53.739 size: 824.000000 MiB heap id: 0 00:08:53.739 end heaps---------- 00:08:53.739 9 mempools totaling size 603.782043 MiB 00:08:53.739 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:53.739 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:53.739 size: 100.555481 MiB name: bdev_io_58271 00:08:53.739 size: 50.003479 MiB name: msgpool_58271 00:08:53.739 size: 36.509338 MiB name: fsdev_io_58271 00:08:53.739 size: 21.763794 MiB name: PDU_Pool 00:08:53.739 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:53.739 size: 4.133484 MiB name: evtpool_58271 00:08:53.739 size: 0.026123 MiB name: Session_Pool 00:08:53.739 end mempools------- 00:08:53.739 6 memzones totaling size 4.142822 MiB 00:08:53.739 size: 1.000366 MiB name: RG_ring_0_58271 00:08:53.739 size: 1.000366 MiB name: RG_ring_1_58271 00:08:53.739 size: 1.000366 MiB name: RG_ring_4_58271 00:08:53.739 size: 1.000366 MiB name: RG_ring_5_58271 00:08:53.739 size: 0.125366 MiB name: RG_ring_2_58271 00:08:53.739 size: 0.015991 MiB name: RG_ring_3_58271 00:08:53.739 end memzones------- 00:08:53.739 13:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:53.739 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:08:53.739 list of free elements. 
size: 16.779907 MiB 00:08:53.739 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:53.739 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:53.739 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:53.739 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:53.739 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:53.740 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:53.740 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:53.740 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:53.740 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:53.740 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:53.740 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:53.740 element at address: 0x20001b400000 with size: 0.561462 MiB 00:08:53.740 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:53.740 element at address: 0x200019600000 with size: 0.487976 MiB 00:08:53.740 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:53.740 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:53.740 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:53.740 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:53.740 list of standard malloc elements. size: 199.289185 MiB 00:08:53.740 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:53.740 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:53.740 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:53.740 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:53.740 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:53.740 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:53.740 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:53.740 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:53.740 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:53.740 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:53.740 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:53.740 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:53.740 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:53.740 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:53.740 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:53.740 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:53.741 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:53.741 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4917c0 with size: 0.000244 MiB 
00:08:53.741 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:53.741 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:53.741 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:53.741 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d280 
with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:53.741 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:53.742 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:53.742 list of memzone associated elements. 
size: 607.930908 MiB 00:08:53.742 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:53.742 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:53.742 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:53.742 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:53.742 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:53.742 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58271_0 00:08:53.742 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:53.742 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58271_0 00:08:53.742 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:53.742 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58271_0 00:08:53.742 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:53.742 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:53.742 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:53.742 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:53.742 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:53.742 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58271_0 00:08:53.742 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:53.742 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58271 00:08:53.742 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:53.742 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58271 00:08:53.742 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:53.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:53.742 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:53.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:53.742 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:53.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:53.742 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:53.742 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:53.742 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:53.742 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58271 00:08:53.742 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:53.742 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58271 00:08:53.742 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:53.742 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58271 00:08:53.742 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:53.742 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58271 00:08:53.742 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:53.742 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58271 00:08:53.742 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:53.742 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58271 00:08:53.742 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:53.742 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:53.742 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:53.742 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:53.742 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:53.742 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:53.742 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:53.742 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58271 00:08:53.742 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:53.742 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58271 00:08:53.742 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:53.742 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:53.742 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:53.742 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:53.742 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:53.742 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58271 00:08:53.742 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:53.742 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:53.742 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:53.742 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58271 00:08:53.742 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:53.742 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58271 00:08:53.742 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:53.742 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58271 00:08:53.742 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:53.742 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:53.742 13:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:53.742 13:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58271 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58271 ']' 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58271 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58271 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58271' 00:08:53.742 killing process with pid 58271 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58271 00:08:53.742 13:28:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58271 00:08:55.658 ************************************ 00:08:55.658 END TEST dpdk_mem_utility 00:08:55.658 ************************************ 00:08:55.658 00:08:55.658 real 0m3.192s 00:08:55.658 user 0m3.144s 00:08:55.658 sys 0m0.562s 00:08:55.658 13:28:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.658 13:28:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:55.658 13:28:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:55.658 13:28:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.658 13:28:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.658 13:28:54 -- common/autotest_common.sh@10 -- # set +x 
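The 'killprocess 58271' sequence traced above is the shutdown pattern used throughout these tests: probe the pid with 'kill -0', read the command name back with 'ps --no-headers -o comm=' to confirm it is the expected reactor rather than a sudo wrapper, then signal and reap it. A minimal sketch of that pattern (helper name and error handling here are illustrative, not the exact autotest_common.sh body):

    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1                # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0   # process already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK app
        [[ $name == sudo ]] && return 1          # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null   # reap so the exit status is observed
    }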
00:08:55.658 ************************************ 00:08:55.658 START TEST event 00:08:55.658 ************************************ 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:55.658 * Looking for test storage... 00:08:55.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.658 13:28:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.658 13:28:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.658 13:28:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.658 13:28:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.658 13:28:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.658 13:28:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.658 13:28:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.658 13:28:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.658 13:28:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.658 13:28:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.658 13:28:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.658 13:28:54 event -- scripts/common.sh@344 -- # case "$op" in 00:08:55.658 13:28:54 event -- scripts/common.sh@345 -- # : 1 00:08:55.658 13:28:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.658 13:28:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.658 13:28:54 event -- scripts/common.sh@365 -- # decimal 1 00:08:55.658 13:28:54 event -- scripts/common.sh@353 -- # local d=1 00:08:55.658 13:28:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.658 13:28:54 event -- scripts/common.sh@355 -- # echo 1 00:08:55.658 13:28:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.658 13:28:54 event -- scripts/common.sh@366 -- # decimal 2 00:08:55.658 13:28:54 event -- scripts/common.sh@353 -- # local d=2 00:08:55.658 13:28:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.658 13:28:54 event -- scripts/common.sh@355 -- # echo 2 00:08:55.658 13:28:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.658 13:28:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.658 13:28:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.658 13:28:54 event -- scripts/common.sh@368 -- # return 0 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.658 --rc genhtml_branch_coverage=1 00:08:55.658 --rc genhtml_function_coverage=1 00:08:55.658 --rc genhtml_legend=1 00:08:55.658 --rc geninfo_all_blocks=1 00:08:55.658 --rc geninfo_unexecuted_blocks=1 00:08:55.658 00:08:55.658 ' 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.658 --rc genhtml_branch_coverage=1 00:08:55.658 --rc genhtml_function_coverage=1 00:08:55.658 --rc genhtml_legend=1 00:08:55.658 --rc 
geninfo_all_blocks=1 00:08:55.658 --rc geninfo_unexecuted_blocks=1 00:08:55.658 00:08:55.658 ' 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.658 --rc genhtml_branch_coverage=1 00:08:55.658 --rc genhtml_function_coverage=1 00:08:55.658 --rc genhtml_legend=1 00:08:55.658 --rc geninfo_all_blocks=1 00:08:55.658 --rc geninfo_unexecuted_blocks=1 00:08:55.658 00:08:55.658 ' 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.658 --rc genhtml_branch_coverage=1 00:08:55.658 --rc genhtml_function_coverage=1 00:08:55.658 --rc genhtml_legend=1 00:08:55.658 --rc geninfo_all_blocks=1 00:08:55.658 --rc geninfo_unexecuted_blocks=1 00:08:55.658 00:08:55.658 ' 00:08:55.658 13:28:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:55.658 13:28:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:55.658 13:28:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:55.658 13:28:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.658 13:28:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.658 ************************************ 00:08:55.658 START TEST event_perf 00:08:55.658 ************************************ 00:08:55.658 13:28:55 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:55.658 Running I/O for 1 seconds...[2024-11-20 13:28:55.052340] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:08:55.658 [2024-11-20 13:28:55.052661] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58368 ] 00:08:55.920 [2024-11-20 13:28:55.219944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.181 [2024-11-20 13:28:55.368954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.181 [2024-11-20 13:28:55.369703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.181 [2024-11-20 13:28:55.369211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.181 [2024-11-20 13:28:55.369887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.124 Running I/O for 1 seconds... 00:08:57.124 lcore 0: 112662 00:08:57.124 lcore 1: 112660 00:08:57.124 lcore 2: 112661 00:08:57.124 lcore 3: 112661 00:08:57.384 done. 
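With -m 0xF and -t 1, event_perf starts one reactor per core in the mask, and each 'lcore N' line above is the number of events that reactor processed during the one-second run, so the counts double as per-core events/sec and their near-equality shows the events being distributed evenly. A quick sanity check of the aggregate, using the numbers from this run:

    # Total events across the four reactors for the 1-second run,
    # i.e. aggregate events per second (numbers copied from the output above):
    echo $(( 112662 + 112660 + 112661 + 112661 ))   # -> 450644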
00:08:57.384 00:08:57.384 real 0m1.548s 00:08:57.384 user 0m4.296s 00:08:57.384 sys 0m0.118s 00:08:57.384 13:28:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.384 ************************************ 00:08:57.384 END TEST event_perf 00:08:57.384 ************************************ 00:08:57.384 13:28:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:57.384 13:28:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:57.384 13:28:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:57.384 13:28:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.384 13:28:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.384 ************************************ 00:08:57.384 START TEST event_reactor 00:08:57.384 ************************************ 00:08:57.384 13:28:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:57.384 [2024-11-20 13:28:56.664841] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:08:57.384 [2024-11-20 13:28:56.665023] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58413 ] 00:08:57.644 [2024-11-20 13:28:56.826716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.644 [2024-11-20 13:28:56.981559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.028 test_start 00:08:59.028 oneshot 00:08:59.028 tick 100 00:08:59.028 tick 100 00:08:59.028 tick 250 00:08:59.028 tick 100 00:08:59.028 tick 100 00:08:59.028 tick 100 00:08:59.028 tick 250 00:08:59.028 tick 500 00:08:59.028 tick 100 00:08:59.028 tick 100 00:08:59.028 tick 250 00:08:59.028 tick 100 00:08:59.028 tick 100 00:08:59.028 test_end 00:08:59.028 00:08:59.028 real 0m1.532s 00:08:59.028 user 0m1.322s 00:08:59.028 sys 0m0.096s 00:08:59.028 13:28:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.028 ************************************ 00:08:59.028 END TEST event_reactor 00:08:59.028 ************************************ 00:08:59.028 13:28:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:59.028 13:28:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:59.028 13:28:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:59.028 13:28:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.028 13:28:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.028 ************************************ 00:08:59.028 START TEST event_reactor_perf 00:08:59.028 ************************************ 00:08:59.028 13:28:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:59.028 [2024-11-20 13:28:58.267332] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:08:59.028 [2024-11-20 13:28:58.267479] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58450 ] 00:08:59.028 [2024-11-20 13:28:58.434706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.289 [2024-11-20 13:28:58.576894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.677 test_start 00:09:00.677 test_end 00:09:00.677 Performance: 306480 events per second 00:09:00.677 ************************************ 00:09:00.677 END TEST event_reactor_perf 00:09:00.677 ************************************ 00:09:00.677 00:09:00.677 real 0m1.530s 00:09:00.677 user 0m1.322s 00:09:00.677 sys 0m0.092s 00:09:00.677 13:28:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.677 13:28:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:00.677 13:28:59 event -- event/event.sh@49 -- # uname -s 00:09:00.677 13:28:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:00.677 13:28:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:00.677 13:28:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.677 13:28:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.677 13:28:59 event -- common/autotest_common.sh@10 -- # set +x 00:09:00.677 ************************************ 00:09:00.677 START TEST event_scheduler 00:09:00.677 ************************************ 00:09:00.677 13:28:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:00.677 * Looking for test storage... 
00:09:00.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:00.677 13:28:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.677 13:28:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.677 13:28:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.677 13:28:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:00.677 13:28:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:00.677 13:29:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.677 13:29:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:00.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.677 13:29:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.677 13:29:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.677 13:29:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.677 13:29:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.677 --rc genhtml_branch_coverage=1 00:09:00.677 --rc genhtml_function_coverage=1 00:09:00.677 --rc genhtml_legend=1 00:09:00.677 --rc geninfo_all_blocks=1 00:09:00.677 --rc geninfo_unexecuted_blocks=1 00:09:00.677 00:09:00.677 ' 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.677 --rc genhtml_branch_coverage=1 00:09:00.677 --rc genhtml_function_coverage=1 00:09:00.677 --rc genhtml_legend=1 00:09:00.677 --rc geninfo_all_blocks=1 00:09:00.677 --rc geninfo_unexecuted_blocks=1 00:09:00.677 00:09:00.677 ' 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.677 --rc genhtml_branch_coverage=1 00:09:00.677 --rc genhtml_function_coverage=1 00:09:00.677 --rc genhtml_legend=1 00:09:00.677 --rc geninfo_all_blocks=1 00:09:00.677 --rc geninfo_unexecuted_blocks=1 00:09:00.677 00:09:00.677 ' 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.677 --rc genhtml_branch_coverage=1 00:09:00.677 --rc genhtml_function_coverage=1 00:09:00.677 --rc genhtml_legend=1 00:09:00.677 --rc geninfo_all_blocks=1 00:09:00.677 --rc geninfo_unexecuted_blocks=1 00:09:00.677 00:09:00.677 ' 00:09:00.677 13:29:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:00.677 13:29:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58520 00:09:00.677 13:29:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.677 13:29:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58520 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58520 ']' 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.677 13:29:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:00.677 13:29:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:00.677 [2024-11-20 13:29:00.094407] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:09:00.677 [2024-11-20 13:29:00.094932] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58520 ] 00:09:00.939 [2024-11-20 13:29:00.268671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.202 [2024-11-20 13:29:00.416028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.202 [2024-11-20 13:29:00.416587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.202 [2024-11-20 13:29:00.417034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.202 [2024-11-20 13:29:00.417159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.773 13:29:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.773 13:29:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:01.773 13:29:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:01.773 13:29:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.773 13:29:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:01.773 POWER: Cannot set governor of lcore 0 to userspace 00:09:01.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:01.773 POWER: Cannot set governor of lcore 0 to performance 00:09:01.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:01.773 POWER: Cannot set governor of lcore 0 to userspace 00:09:01.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:01.773 POWER: Cannot set governor of lcore 0 to userspace 00:09:01.773 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:01.773 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:01.773 POWER: Unable to set Power Management Environment for lcore 0 00:09:01.774 [2024-11-20 13:29:01.024128] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:01.774 [2024-11-20 13:29:01.024187] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:01.774 [2024-11-20 13:29:01.024212] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:01.774 [2024-11-20 13:29:01.024296] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:01.774 [2024-11-20 13:29:01.024308] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:01.774 [2024-11-20 13:29:01.024319] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:01.774 13:29:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.774 13:29:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:01.774 13:29:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.774 13:29:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.034 [2024-11-20 13:29:01.302635] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
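The POWER and GUEST_CHANNEL errors above are expected inside this VM: the dynamic scheduler asks DPDK's power library for control of the cpufreq governor, but the guest exposes neither a writable scaling_governor sysfs knob nor a virtio power-agent channel, so governor init fails and the scheduler falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95, per the notices above). A sketch of how one might probe a host for the missing knob (illustrative only; the real probing happens inside DPDK):

    # Check the sysfs file the governor errors above refer to; on this VM
    # it is absent/read-only, hence the fallback to scheduler defaults.
    gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    if [[ -w $gov ]]; then
        cat "$gov"    # e.g. performance or userspace
    else
        echo "no writable cpufreq governor at $gov; dynamic scheduler keeps defaults"
    fi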
00:09:02.035 13:29:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:02.035 13:29:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.035 13:29:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 ************************************ 00:09:02.035 START TEST scheduler_create_thread 00:09:02.035 ************************************ 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 2 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 3 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 4 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 5 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 6 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 7 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 8 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 9 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 10 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.035 13:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.977 13:29:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.977 13:29:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:02.977 13:29:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.977 13:29:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.360 13:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.360 13:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:04.360 13:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:04.360 13:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.360 13:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:05.304 ************************************ 00:09:05.304 END TEST scheduler_create_thread 00:09:05.304 ************************************ 00:09:05.304 13:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.304 00:09:05.304 real 0m3.379s 00:09:05.304 user 0m0.019s 00:09:05.304 sys 0m0.005s 00:09:05.304 13:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.304 13:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:05.565 13:29:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:05.565 13:29:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58520 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58520 ']' 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58520 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58520 00:09:05.565 killing process with pid 58520 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58520' 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58520 00:09:05.565 13:29:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58520 00:09:05.826 [2024-11-20 13:29:05.079947] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
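scheduler_create_thread drives the running test app purely over its RPC socket: pinned active and idle threads on each core mask, unpinned one-third- and half-active threads, thread 11 flipped to 50% active, and thread 12 created and then deleted. A condensed sketch of that sequence (assuming scripts/rpc.py is invoked so it can find the test's scheduler_plugin; flags and thread ids as traced in this run):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to core 0, 100% busy
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # pinned to core 0, idle
    tid=$($rpc scheduler_thread_create -n half_active -a 0)      # returned id was 11 here
    $rpc scheduler_thread_set_active "$tid" 50                   # make it 50% active
    $rpc scheduler_thread_delete 12                              # remove the 'deleted' thread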
00:09:06.770 00:09:06.770 real 0m6.123s 00:09:06.770 user 0m12.573s 00:09:06.770 sys 0m0.484s 00:09:06.770 13:29:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.770 13:29:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 ************************************ 00:09:06.770 END TEST event_scheduler 00:09:06.770 ************************************ 00:09:06.770 13:29:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:06.770 13:29:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:06.770 13:29:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.770 13:29:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.770 13:29:06 event -- common/autotest_common.sh@10 -- # set +x 00:09:06.770 ************************************ 00:09:06.770 START TEST app_repeat 00:09:06.770 ************************************ 00:09:06.770 13:29:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58637 00:09:06.770 Process app_repeat pid: 58637 00:09:06.770 spdk_app_start Round 0 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58637' 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:06.770 13:29:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58637 /var/tmp/spdk-nbd.sock 00:09:06.770 13:29:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58637 ']' 00:09:06.770 13:29:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:06.770 13:29:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:06.770 13:29:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:06.770 13:29:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.771 13:29:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:06.771 [2024-11-20 13:29:06.083638] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:09:06.771 [2024-11-20 13:29:06.084072] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58637 ] 00:09:07.031 [2024-11-20 13:29:06.247994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.031 [2024-11-20 13:29:06.386401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.031 [2024-11-20 13:29:06.386429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.603 13:29:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.603 13:29:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:07.603 13:29:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:07.863 Malloc0 00:09:07.863 13:29:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.123 Malloc1 00:09:08.385 13:29:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:08.385 /dev/nbd0 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:08.385 13:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:08.385 13:29:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:08.385 13:29:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:08.385 13:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.385 13:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.385 13:29:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:08.647 13:29:07 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:08.647 1+0 records in 00:09:08.647 1+0 records out 00:09:08.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589198 s, 7.0 MB/s 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.647 13:29:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:08.647 13:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.647 13:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.647 13:29:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:08.647 /dev/nbd1 00:09:08.647 13:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:08.647 13:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.647 13:29:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:08.909 1+0 records in 00:09:08.909 1+0 records out 00:09:08.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348915 s, 11.7 MB/s 00:09:08.909 13:29:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.909 13:29:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:08.909 13:29:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.909 13:29:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.909 13:29:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:08.909 13:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.909 13:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.909 13:29:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:08.909 13:29:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.909 
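Each nbd_start_disk above hands off to waitfornbd, whose two retry loops are visible in the trace: first poll /proc/partitions until the kernel publishes the device, then read one 4 KiB block with O_DIRECT until a non-empty copy lands, producing the dd and stat lines above. A sketch; the inter-retry sleep is assumed (xtrace hides it) and the scratch path is illustrative:

waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/tmp/nbdtest.$$   # the log writes under spdk/test/event/nbdtest
    # wait up to 20 tries for the device to appear in the partition table
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    for ((i = 1; i <= 20; i++)); do
        # a successful O_DIRECT read of the first block proves the device is usable
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]] && return 0
        sleep 0.1
    done
    return 1
}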
13:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:09.171 { 00:09:09.171 "nbd_device": "/dev/nbd0", 00:09:09.171 "bdev_name": "Malloc0" 00:09:09.171 }, 00:09:09.171 { 00:09:09.171 "nbd_device": "/dev/nbd1", 00:09:09.171 "bdev_name": "Malloc1" 00:09:09.171 } 00:09:09.171 ]' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:09.171 { 00:09:09.171 "nbd_device": "/dev/nbd0", 00:09:09.171 "bdev_name": "Malloc0" 00:09:09.171 }, 00:09:09.171 { 00:09:09.171 "nbd_device": "/dev/nbd1", 00:09:09.171 "bdev_name": "Malloc1" 00:09:09.171 } 00:09:09.171 ]' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:09.171 /dev/nbd1' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:09.171 /dev/nbd1' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:09.171 256+0 records in 00:09:09.171 256+0 records out 00:09:09.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0076567 s, 137 MB/s 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:09.171 256+0 records in 00:09:09.171 256+0 records out 00:09:09.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175818 s, 59.6 MB/s 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:09.171 256+0 records in 00:09:09.171 256+0 records out 00:09:09.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114216 s, 9.2 MB/s 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.171 13:29:08 event.app_repeat 
-- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.171 13:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.433 13:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.694 13:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:09.954 13:29:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:09.954 13:29:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:10.526 13:29:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:11.096 [2024-11-20 13:29:10.511086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.358 [2024-11-20 13:29:10.635750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.358 [2024-11-20 13:29:10.635945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.358 [2024-11-20 13:29:10.779635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:11.358 [2024-11-20 13:29:10.779714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:13.310 13:29:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:13.310 13:29:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:13.310 spdk_app_start Round 1 00:09:13.310 13:29:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58637 /var/tmp/spdk-nbd.sock 00:09:13.310 13:29:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58637 ']' 00:09:13.310 13:29:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:13.310 13:29:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:13.310 13:29:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
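The pass that Round 0 just completed, and that Rounds 1 and 2 repeat below, is nbd_common.sh's data-verify pipeline: attach each malloc bdev to an nbd device, confirm nbd_get_disks reports both, push 1 MiB of urandom through each device with O_DIRECT, then cmp the device contents back against the source file. Condensed to its core, with error paths and the waitfornbd calls trimmed and the rpc path abbreviated:

rpc=scripts/rpc.py   # the log invokes /home/vagrant/spdk_repo/spdk/scripts/rpc.py

nbd_rpc_data_verify() {
    local rpc_server=$1 bdev_list=($2) nbd_list=($3)
    local tmp_file=/tmp/nbdrandtest n dev count

    # attach each bdev to its nbd device (nbd_start_disk Malloc0 /dev/nbd0, ...)
    for n in "${!bdev_list[@]}"; do
        "$rpc" -s "$rpc_server" nbd_start_disk "${bdev_list[n]}" "${nbd_list[n]}"
    done
    # the JSON from nbd_get_disks must list exactly as many devices as were started
    count=$("$rpc" -s "$rpc_server" nbd_get_disks | grep -c /dev/nbd)
    [[ $count -eq ${#nbd_list[@]} ]] || return 1

    # write phase: 256 x 4 KiB = 1 MiB of random data pushed through every device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB of each device must match byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || return 1
    done
    rm "$tmp_file"
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$rpc_server" nbd_stop_disk "$dev"
    done
}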
00:09:13.310 13:29:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.310 13:29:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:13.569 13:29:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.569 13:29:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:13.569 13:29:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:13.829 Malloc0 00:09:14.091 13:29:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.092 Malloc1 00:09:14.353 13:29:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.353 13:29:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:14.616 /dev/nbd0 00:09:14.616 13:29:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:14.616 13:29:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:14.616 1+0 records in 00:09:14.616 1+0 records out 
00:09:14.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033686 s, 12.2 MB/s 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:14.616 13:29:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:14.616 13:29:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.616 13:29:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.616 13:29:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:14.876 /dev/nbd1 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:14.876 1+0 records in 00:09:14.876 1+0 records out 00:09:14.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283444 s, 14.5 MB/s 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:14.876 13:29:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.876 13:29:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.135 { 00:09:15.135 "nbd_device": "/dev/nbd0", 00:09:15.135 "bdev_name": "Malloc0" 00:09:15.135 }, 00:09:15.135 { 00:09:15.135 "nbd_device": "/dev/nbd1", 00:09:15.135 "bdev_name": "Malloc1" 00:09:15.135 } 
00:09:15.135 ]' 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.135 { 00:09:15.135 "nbd_device": "/dev/nbd0", 00:09:15.135 "bdev_name": "Malloc0" 00:09:15.135 }, 00:09:15.135 { 00:09:15.135 "nbd_device": "/dev/nbd1", 00:09:15.135 "bdev_name": "Malloc1" 00:09:15.135 } 00:09:15.135 ]' 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.135 /dev/nbd1' 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.135 /dev/nbd1' 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.135 13:29:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:15.136 256+0 records in 00:09:15.136 256+0 records out 00:09:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437279 s, 240 MB/s 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.136 256+0 records in 00:09:15.136 256+0 records out 00:09:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196653 s, 53.3 MB/s 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:15.136 256+0 records in 00:09:15.136 256+0 records out 00:09:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228801 s, 45.8 MB/s 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:15.136 13:29:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.136 13:29:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.395 13:29:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.655 13:29:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.915 13:29:15 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:15.915 13:29:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:15.915 13:29:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:16.176 13:29:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:16.791 [2024-11-20 13:29:16.199766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.051 [2024-11-20 13:29:16.300421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.051 [2024-11-20 13:29:16.300574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.051 [2024-11-20 13:29:16.425854] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:17.051 [2024-11-20 13:29:16.425940] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:19.597 spdk_app_start Round 2 00:09:19.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:19.597 13:29:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:19.597 13:29:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:19.597 13:29:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58637 /var/tmp/spdk-nbd.sock 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58637 ']' 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
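Round 2 blocks in the same waitforlisten call as the previous rounds; the helper's retry loop runs under xtrace_disable, so only the entry echo and the eventual (( i == 0 )) / return 0 appear in the log. A sketch under the assumption that it polls until the RPC socket answers; the rpc_get_methods probe is illustrative, not the helper's verbatim body:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # give up if the target died during startup
        kill -0 "$pid" 2>/dev/null || return 1
        # any successful RPC round trip proves the app is listening
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}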
00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.597 13:29:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:19.597 13:29:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:19.597 Malloc0 00:09:19.597 13:29:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:19.859 Malloc1 00:09:19.859 13:29:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:19.859 13:29:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:20.119 /dev/nbd0 00:09:20.119 13:29:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:20.119 13:29:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.119 1+0 records in 00:09:20.119 1+0 records out 
00:09:20.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186093 s, 22.0 MB/s 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:20.119 13:29:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:20.119 13:29:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.119 13:29:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.119 13:29:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:20.461 /dev/nbd1 00:09:20.461 13:29:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:20.462 13:29:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.462 1+0 records in 00:09:20.462 1+0 records out 00:09:20.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636391 s, 6.4 MB/s 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:20.462 13:29:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:20.462 13:29:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.462 13:29:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.462 13:29:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.462 13:29:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.462 13:29:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.731 13:29:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:20.731 { 00:09:20.731 "nbd_device": "/dev/nbd0", 00:09:20.731 "bdev_name": "Malloc0" 00:09:20.731 }, 00:09:20.731 { 00:09:20.731 "nbd_device": "/dev/nbd1", 00:09:20.731 "bdev_name": "Malloc1" 00:09:20.731 } 
00:09:20.731 ]' 00:09:20.731 13:29:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:20.731 { 00:09:20.731 "nbd_device": "/dev/nbd0", 00:09:20.731 "bdev_name": "Malloc0" 00:09:20.731 }, 00:09:20.731 { 00:09:20.731 "nbd_device": "/dev/nbd1", 00:09:20.731 "bdev_name": "Malloc1" 00:09:20.731 } 00:09:20.731 ]' 00:09:20.731 13:29:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.731 13:29:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:20.731 /dev/nbd1' 00:09:20.731 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:20.731 /dev/nbd1' 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:20.732 256+0 records in 00:09:20.732 256+0 records out 00:09:20.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669008 s, 157 MB/s 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:20.732 256+0 records in 00:09:20.732 256+0 records out 00:09:20.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182466 s, 57.5 MB/s 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:20.732 256+0 records in 00:09:20.732 256+0 records out 00:09:20.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314925 s, 33.3 MB/s 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.732 13:29:20 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.732 13:29:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.994 13:29:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.255 13:29:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:21.516 13:29:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:21.516 13:29:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:22.085 13:29:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:23.030 [2024-11-20 13:29:22.087618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.030 [2024-11-20 13:29:22.227017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.030 [2024-11-20 13:29:22.227140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.030 [2024-11-20 13:29:22.382881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:23.030 [2024-11-20 13:29:22.383017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:25.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:25.035 13:29:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58637 /var/tmp/spdk-nbd.sock 00:09:25.035 13:29:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58637 ']' 00:09:25.035 13:29:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.035 13:29:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.035 13:29:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
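Teardown at the end of every round is the mirror image of startup: nbd_stop_disk is issued over RPC for each device, then waitfornbd_exit polls /proc/partitions until the kernel has dropped the name, which is the grep / break / return 0 pattern seen above. Sketch, with the retry sleep assumed as before:

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # done once the name has vanished from the partition table
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    ((i <= 20))
}

nbd_stop_disks() {
    local rpc_server=$1 nbd_list=($2) dev
    for dev in "${nbd_list[@]}"; do
        scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$dev"
        waitfornbd_exit "$(basename "$dev")"
    done
}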
00:09:25.035 13:29:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.035 13:29:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:25.297 13:29:24 event.app_repeat -- event/event.sh@39 -- # killprocess 58637 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58637 ']' 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58637 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58637 00:09:25.297 killing process with pid 58637 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58637' 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58637 00:09:25.297 13:29:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58637 00:09:26.238 spdk_app_start is called in Round 0. 00:09:26.238 Shutdown signal received, stop current app iteration 00:09:26.238 Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 reinitialization... 00:09:26.238 spdk_app_start is called in Round 1. 00:09:26.238 Shutdown signal received, stop current app iteration 00:09:26.238 Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 reinitialization... 00:09:26.238 spdk_app_start is called in Round 2. 00:09:26.238 Shutdown signal received, stop current app iteration 00:09:26.238 Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 reinitialization... 00:09:26.238 spdk_app_start is called in Round 3. 00:09:26.238 Shutdown signal received, stop current app iteration 00:09:26.238 ************************************ 00:09:26.238 END TEST app_repeat 00:09:26.238 ************************************ 00:09:26.238 13:29:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:26.238 13:29:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:26.238 00:09:26.238 real 0m19.284s 00:09:26.238 user 0m41.912s 00:09:26.238 sys 0m2.575s 00:09:26.238 13:29:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.238 13:29:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 13:29:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:26.238 13:29:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:26.238 13:29:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.238 13:29:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.238 13:29:25 event -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 ************************************ 00:09:26.238 START TEST cpu_locks 00:09:26.238 ************************************ 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:26.238 * Looking for test storage... 
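Stepping back to the app_repeat run that just ended: the four "spdk_app_start is called in Round N" notices summarize the loop that drove the whole test. app_repeat is launched once with -t 4 and re-enters spdk_app_start after each SIGTERM, while the shell side re-creates the malloc bdevs and re-runs the verify each round. The driving loop in test/event/event.sh, paraphrased as the trace implies and building on the helper sketches above:

rpc_server=/var/tmp/spdk-nbd.sock
repeat_pid=58637   # pid from this log; assigned from $! in the real script

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    # two fresh 64 MB malloc bdevs (4 KiB blocks) back this round's nbd devices
    scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096
    scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096
    nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    # ask the app to exit this iteration; with -t 4 it restarts for the next one
    scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3
done
waitforlisten "$repeat_pid" "$rpc_server"   # final Round 3, then the test tears down
killprocess "$repeat_pid"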
00:09:26.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.238 13:29:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.238 --rc genhtml_branch_coverage=1 00:09:26.238 --rc genhtml_function_coverage=1 00:09:26.238 --rc genhtml_legend=1 00:09:26.238 --rc geninfo_all_blocks=1 00:09:26.238 --rc geninfo_unexecuted_blocks=1 00:09:26.238 00:09:26.238 ' 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.238 --rc genhtml_branch_coverage=1 00:09:26.238 --rc genhtml_function_coverage=1 
00:09:26.238 --rc genhtml_legend=1 00:09:26.238 --rc geninfo_all_blocks=1 00:09:26.238 --rc geninfo_unexecuted_blocks=1 00:09:26.238 00:09:26.238 ' 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.238 --rc genhtml_branch_coverage=1 00:09:26.238 --rc genhtml_function_coverage=1 00:09:26.238 --rc genhtml_legend=1 00:09:26.238 --rc geninfo_all_blocks=1 00:09:26.238 --rc geninfo_unexecuted_blocks=1 00:09:26.238 00:09:26.238 ' 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.238 --rc genhtml_branch_coverage=1 00:09:26.238 --rc genhtml_function_coverage=1 00:09:26.238 --rc genhtml_legend=1 00:09:26.238 --rc geninfo_all_blocks=1 00:09:26.238 --rc geninfo_unexecuted_blocks=1 00:09:26.238 00:09:26.238 ' 00:09:26.238 13:29:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:26.238 13:29:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:26.238 13:29:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:26.238 13:29:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.238 13:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 ************************************ 00:09:26.238 START TEST default_locks 00:09:26.238 ************************************ 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59084 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59084 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59084 ']' 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:26.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.238 13:29:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.496 [2024-11-20 13:29:25.673076] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
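The lt/cmp_versions trace above splits each version string on the separators '.', '-' and ':' and compares the pieces numerically, left to right. A minimal standalone sketch of the same idea (illustrative only, not the literal scripts/common.sh implementation; it assumes purely numeric components):

  #!/usr/bin/env bash
  # Compare two dotted version strings component by component, the way
  # the cmp_versions trace above does. Returns 0 when $1 sorts before $2.
  version_lt() {
    local IFS=.-:                      # same separators as the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      # missing components count as 0, so "1.15" equals "1.15.0"
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                           # versions are equal
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates the 2.x option names"

This is how the suite decides whether the installed lcov is older than 2.x and therefore needs the --rc lcov_* spelling of the coverage options.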
00:09:26.496 [2024-11-20 13:29:25.673216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:09:26.496 [2024-11-20 13:29:25.835591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.758 [2024-11-20 13:29:25.938507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.323 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.323 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:27.323 13:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59084 00:09:27.323 13:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59084 00:09:27.323 13:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59084 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59084 ']' 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59084 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59084 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.581 killing process with pid 59084 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59084' 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59084 00:09:27.581 13:29:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59084 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59084 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59084 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59084 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59084 ']' 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.480 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:29.480 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59084) - No such process 00:09:29.480 ERROR: process (pid: 59084) is no longer running 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:29.480 00:09:29.480 real 0m2.840s 00:09:29.480 user 0m2.845s 00:09:29.480 sys 0m0.526s 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.480 13:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:29.480 ************************************ 00:09:29.480 END TEST default_locks 00:09:29.480 ************************************ 00:09:29.480 13:29:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:29.480 13:29:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.480 13:29:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.480 13:29:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:29.480 ************************************ 00:09:29.480 START TEST default_locks_via_rpc 00:09:29.480 ************************************ 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59143 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59143 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59143 ']' 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
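The locks_exist check traced above boils down to asking lslocks whether the target pid still holds a lock on a path containing spdk_cpu_lock. A hedged stand-alone sketch of that check (a simplification, not the verbatim cpu_locks.sh helper):

  #!/usr/bin/env bash
  # Return 0 if the given pid holds a lock on one of the
  # /var/tmp/spdk_cpu_lock_* files, mirroring the lslocks|grep trace above.
  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  if locks_exist "$1"; then
    echo "pid $1 still holds a CPU core lock"
  fi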
00:09:29.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.480 13:29:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:29.480 [2024-11-20 13:29:28.541907] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:29.480 [2024-11-20 13:29:28.542045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:09:29.481 [2024-11-20 13:29:28.700290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.481 [2024-11-20 13:29:28.802933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59143 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59143 00:09:30.046 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59143 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59143 ']' 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59143 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.303 
13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59143 00:09:30.303 killing process with pid 59143 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59143' 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59143 00:09:30.303 13:29:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59143 00:09:32.201 00:09:32.201 real 0m2.720s 00:09:32.201 user 0m2.731s 00:09:32.201 sys 0m0.441s 00:09:32.201 13:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.201 13:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.202 ************************************ 00:09:32.202 END TEST default_locks_via_rpc 00:09:32.202 ************************************ 00:09:32.202 13:29:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:32.202 13:29:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.202 13:29:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.202 13:29:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.202 ************************************ 00:09:32.202 START TEST non_locking_app_on_locked_coremask 00:09:32.202 ************************************ 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59206 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59206 /var/tmp/spdk.sock 00:09:32.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59206 ']' 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.202 13:29:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.202 [2024-11-20 13:29:31.297735] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
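default_locks_via_rpc exercises the same core locks, but toggles them at runtime over the RPC socket instead of at startup. Assuming a stock SPDK checkout run from the repository root, the equivalent manual invocation would look roughly like this (socket path taken from the trace):

  # Release the core locks held by a running spdk_tgt, then re-take them.
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks | grep spdk_cpu_lock || echo "no core locks held"
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks | grep -q spdk_cpu_lock && echo "core locks re-acquired"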
00:09:32.202 [2024-11-20 13:29:31.297860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:09:32.202 [2024-11-20 13:29:31.450728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.202 [2024-11-20 13:29:31.554152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.768 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.768 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:32.768 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59222 00:09:32.768 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59222 /var/tmp/spdk2.sock 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59222 ']' 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.769 13:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.026 [2024-11-20 13:29:32.236114] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:33.026 [2024-11-20 13:29:32.236377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:09:33.026 [2024-11-20 13:29:32.412529] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
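non_locking_app_on_locked_coremask starts a second target on the same core; it can only come up because the second instance opts out of locking and talks on its own RPC socket. A sketch of the two launches, with flags and paths taken from the trace (run from the repository root):

  # First target claims core 0 and takes the lock (the default).
  build/bin/spdk_tgt -m 0x1 &
  # Second target shares core 0 but opts out of locking and uses its
  # own RPC socket, so both can come up side by side.
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &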
00:09:33.026 [2024-11-20 13:29:32.412597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.283 [2024-11-20 13:29:32.612947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.654 13:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.654 13:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:34.654 13:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59206 00:09:34.654 13:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59206 00:09:34.654 13:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:34.912 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59206 00:09:34.912 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59206 ']' 00:09:34.912 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59206 00:09:34.912 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:34.912 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.912 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59206 00:09:34.913 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.913 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.913 killing process with pid 59206 00:09:34.913 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59206' 00:09:34.913 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59206 00:09:34.913 13:29:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59206 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59222 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59222 ']' 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59222 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59222 00:09:38.192 killing process with pid 59222 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59222' 00:09:38.192 13:29:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59222 00:09:38.192 13:29:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59222 00:09:39.125 ************************************ 00:09:39.125 END TEST non_locking_app_on_locked_coremask 00:09:39.125 ************************************ 00:09:39.125 00:09:39.125 real 0m7.247s 00:09:39.125 user 0m7.477s 00:09:39.125 sys 0m0.896s 00:09:39.125 13:29:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.125 13:29:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 13:29:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:39.125 13:29:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.125 13:29:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.125 13:29:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 ************************************ 00:09:39.125 START TEST locking_app_on_unlocked_coremask 00:09:39.125 ************************************ 00:09:39.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59324 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59324 /var/tmp/spdk.sock 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59324 ']' 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.125 13:29:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:39.384 [2024-11-20 13:29:38.572421] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:39.384 [2024-11-20 13:29:38.572524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59324 ] 00:09:39.384 [2024-11-20 13:29:38.733213] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
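When a target runs with --disable-cpumask-locks, as here, no /var/tmp/spdk_cpu_lock_* files are created; the no_locks glob check seen in the earlier default_locks traces asserts exactly that. A minimal version of the assertion (the nullglob handling is an assumption of this sketch, not the verbatim helper):

  #!/usr/bin/env bash
  # Assert that no core lock files are left in /var/tmp.
  shopt -s nullglob                  # empty glob -> empty array
  lock_files=(/var/tmp/spdk_cpu_lock_*)
  if (( ${#lock_files[@]} != 0 )); then
    echo "unexpected lock files: ${lock_files[*]}" >&2
    exit 1
  fi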
00:09:39.384 [2024-11-20 13:29:38.733434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.643 [2024-11-20 13:29:38.835654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59340 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59340 /var/tmp/spdk2.sock 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.210 13:29:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.210 [2024-11-20 13:29:39.535442] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
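Each test then tears both targets down with the killprocess helper traced below and throughout this log: verify the pid is alive, confirm it is a reactor rather than something like a sudo wrapper, kill it, and wait. A simplified rendition of that pattern:

  #!/usr/bin/env bash
  # Simplified version of the killprocess pattern in the traces.
  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [ "$name" = sudo ] && return 1            # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # the target is our child here
  }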
00:09:40.210 [2024-11-20 13:29:39.535820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:09:40.470 [2024-11-20 13:29:39.727536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.730 [2024-11-20 13:29:39.934014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59340 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59340 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59324 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59324 ']' 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59324 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59324 00:09:42.105 killing process with pid 59324 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59324' 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59324 00:09:42.105 13:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59324 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59340 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59340 ']' 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59340 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59340 00:09:45.382 killing process with pid 59340 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.382 13:29:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59340' 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59340 00:09:45.382 13:29:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59340 00:09:46.318 ************************************ 00:09:46.318 END TEST locking_app_on_unlocked_coremask 00:09:46.318 ************************************ 00:09:46.318 00:09:46.318 real 0m7.007s 00:09:46.318 user 0m7.297s 00:09:46.318 sys 0m0.879s 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:46.318 13:29:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:46.318 13:29:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.318 13:29:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.318 13:29:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:46.318 ************************************ 00:09:46.318 START TEST locking_app_on_locked_coremask 00:09:46.318 ************************************ 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59442 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59442 /var/tmp/spdk.sock 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59442 ']' 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:46.318 13:29:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:46.318 [2024-11-20 13:29:45.624083] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:09:46.318 [2024-11-20 13:29:45.624187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:09:46.577 [2024-11-20 13:29:45.775472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.577 [2024-11-20 13:29:45.863182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59458 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59458 /var/tmp/spdk2.sock 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59458 /var/tmp/spdk2.sock 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59458 /var/tmp/spdk2.sock 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59458 ']' 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:47.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.144 13:29:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.402 [2024-11-20 13:29:46.569494] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
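locking_app_on_locked_coremask drives the failure path: the second target launched below cannot start because pid 59442 already holds the core 0 lock. Since lslocks can see these locks, they behave like ordinary file locks on /var/tmp/spdk_cpu_lock_*, so the conflict can be illustrated approximately (this flock sketch is an analogy, not SPDK's actual locking code):

  # Hold the core-0 lock file the way a running target would ...
  flock /var/tmp/spdk_cpu_lock_000 sleep 30 &
  # ... then a non-blocking second claim fails at once, which is the
  # situation behind "Cannot create lock on core 0, probably process
  # 59442 has claimed it".
  flock -n /var/tmp/spdk_cpu_lock_000 true || echo "core 0 already claimed"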
00:09:47.402 [2024-11-20 13:29:46.569781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:09:47.402 [2024-11-20 13:29:46.733236] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59442 has claimed it. 00:09:47.402 [2024-11-20 13:29:46.733304] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:47.968 ERROR: process (pid: 59458) is no longer running 00:09:47.968 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59458) - No such process 00:09:47.968 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.968 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:47.968 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:47.969 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:47.969 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:47.969 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:47.969 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59442 00:09:47.969 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:47.969 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59442 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59442 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59442 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59442 00:09:48.227 killing process with pid 59442 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59442' 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59442 00:09:48.227 13:29:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59442 00:09:49.684 00:09:49.684 real 0m3.210s 00:09:49.684 user 0m3.533s 00:09:49.684 sys 0m0.552s 00:09:49.684 ************************************ 00:09:49.684 END TEST locking_app_on_locked_coremask 00:09:49.684 ************************************ 00:09:49.684 13:29:48 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.684 13:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.684 13:29:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:49.684 13:29:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.684 13:29:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.684 13:29:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.684 ************************************ 00:09:49.684 START TEST locking_overlapped_coremask 00:09:49.684 ************************************ 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:49.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59511 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59511 /var/tmp/spdk.sock 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59511 ']' 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.684 13:29:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.684 [2024-11-20 13:29:48.883228] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
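locking_overlapped_coremask pits -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4, launched below); the two masks intersect on core 2, which is why the second launch must fail. The overlap is a one-line bitwise check:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2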
00:09:49.684 [2024-11-20 13:29:48.883636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:09:49.684 [2024-11-20 13:29:49.032203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.940 [2024-11-20 13:29:49.120197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.940 [2024-11-20 13:29:49.120819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.940 [2024-11-20 13:29:49.120841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59529 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59529 /var/tmp/spdk2.sock 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59529 /var/tmp/spdk2.sock 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59529 /var/tmp/spdk2.sock 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59529 ']' 00:09:50.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.505 13:29:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 [2024-11-20 13:29:49.786225] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
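In the following lines the NOT wrapper inverts the expected exit status (waitforlisten on the refused second target must fail), and check_remaining_locks then globs /var/tmp/spdk_cpu_lock_* and compares against the {000..002} set the surviving -m 0x7 target should still hold. A condensed sketch of both helpers (names mirror the trace; the bodies are simplified):

  #!/usr/bin/env bash
  # Succeed only when the wrapped command fails (the real helper also
  # inspects the exit status in more detail).
  NOT() { ! "$@"; }

  # With only the -m 0x7 target left standing, exactly locks 000-002
  # should remain in /var/tmp.
  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }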
00:09:50.505 [2024-11-20 13:29:49.786400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59529 ] 00:09:50.762 [2024-11-20 13:29:49.968171] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59511 has claimed it. 00:09:50.762 [2024-11-20 13:29:49.968241] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:51.021 ERROR: process (pid: 59529) is no longer running 00:09:51.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59529) - No such process 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59511 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59511 ']' 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59511 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59511 00:09:51.021 killing process with pid 59511 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59511' 00:09:51.021 13:29:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59511 00:09:51.021 13:29:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59511 00:09:52.392 00:09:52.392 real 0m2.854s 00:09:52.392 user 0m7.760s 00:09:52.392 sys 0m0.439s 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.392 ************************************ 00:09:52.392 END TEST locking_overlapped_coremask 00:09:52.392 ************************************ 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:52.392 13:29:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:52.392 13:29:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.392 13:29:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.392 13:29:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.392 ************************************ 00:09:52.392 START TEST locking_overlapped_coremask_via_rpc 00:09:52.392 ************************************ 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59582 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59582 /var/tmp/spdk.sock 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59582 ']' 00:09:52.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.392 13:29:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.392 [2024-11-20 13:29:51.767518] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:52.392 [2024-11-20 13:29:51.768167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:09:52.651 [2024-11-20 13:29:51.919553] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
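Throughout these tests, waitforlisten blocks until the target's UNIX domain socket is up, retrying against the max_retries=100 budget visible in the traces. A simplified stand-in (this polls only for the socket file; the real helper also probes the RPC layer):

  #!/usr/bin/env bash
  # Poll until a UNIX domain socket appears, or give up after 100 tries.
  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for (( i = 0; i < max_retries; i++ )); do
      kill -0 "$pid" || return 1       # target died while we waited
      [ -S "$sock" ] && return 0       # socket exists: target is listening
      sleep 0.1
    done
    return 1
  }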
00:09:52.651 [2024-11-20 13:29:51.919752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:52.651 [2024-11-20 13:29:52.007087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.651 [2024-11-20 13:29:52.007841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.651 [2024-11-20 13:29:52.007860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59600 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59600 /var/tmp/spdk2.sock 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59600 ']' 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:53.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.215 13:29:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.473 [2024-11-20 13:29:52.691958] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:53.473 [2024-11-20 13:29:52.692433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:09:53.473 [2024-11-20 13:29:52.858488] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:53.473 [2024-11-20 13:29:52.861983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.730 [2024-11-20 13:29:53.040360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.730 [2024-11-20 13:29:53.040622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.730 [2024-11-20 13:29:53.040574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.663 [2024-11-20 13:29:54.047132] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59582 has claimed it. 00:09:54.663 request: 00:09:54.663 { 00:09:54.663 "method": "framework_enable_cpumask_locks", 00:09:54.663 "req_id": 1 00:09:54.663 } 00:09:54.663 Got JSON-RPC error response 00:09:54.663 response: 00:09:54.663 { 00:09:54.663 "code": -32603, 00:09:54.663 "message": "Failed to claim CPU core: 2" 00:09:54.663 } 00:09:54.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
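Both targets were started with --disable-cpumask-locks on overlapping masks: 0x7 (cores 0-2) for pid 59582 and 0x1c (cores 2-4) for pid 59600. The first framework_enable_cpumask_locks call claims cores 0-2 for the first target; the second target then cannot claim core 2, which is exactly the -32603 "Failed to claim CPU core: 2" error above. A condensed sketch of the sequence (binary, flags and socket path copied from the trace; the trailing comments are interpretation, not log output):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, locks off
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, own socket
  ./scripts/rpc.py framework_enable_cpumask_locks                      # first target claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # second call fails: core 2 is already locked by the first target (-32603)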
00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59582 /var/tmp/spdk.sock 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59582 ']' 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.663 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59600 /var/tmp/spdk2.sock 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59600 ']' 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
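The NOT wrapper and the es bookkeeping in the trace invert a command's exit status: the test passes precisely because the second framework_enable_cpumask_locks call fails. Reduced to its core (a hedged sketch; the real helper in autotest_common.sh additionally distinguishes signal deaths, which is what the (( es > 128 )) check above is for):

  NOT() {
    if "$@"; then
      return 1     # the command unexpectedly succeeded -> test failure
    fi
    return 0       # a non-zero exit is the expected outcome here
  }
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks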
00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.921 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.178 ************************************ 00:09:55.178 END TEST locking_overlapped_coremask_via_rpc 00:09:55.178 ************************************ 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:55.178 00:09:55.178 real 0m2.825s 00:09:55.178 user 0m1.142s 00:09:55.178 sys 0m0.135s 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.178 13:29:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.178 13:29:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:55.178 13:29:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59582 ]] 00:09:55.178 13:29:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59582 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59582 ']' 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59582 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59582 00:09:55.178 killing process with pid 59582 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59582' 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59582 00:09:55.178 13:29:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59582 00:09:56.568 13:29:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59600 ]] 00:09:56.568 13:29:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59600 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59600 ']' 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59600 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.568 
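The check_remaining_locks step at the start of this block verifies that exactly one lock file per claimed core exists: a glob of the real files under /var/tmp must equal the brace-expanded expected set for cores 0-2. The comparison in isolation (paths copied verbatim from the trace):

  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file each for cores 000-002
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "all three core locks present"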
13:29:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59600 00:09:56.568 killing process with pid 59600 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59600' 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59600 00:09:56.568 13:29:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59600 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:57.980 Process with pid 59582 is not found 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59582 ]] 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59582 00:09:57.980 13:29:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59582 ']' 00:09:57.980 13:29:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59582 00:09:57.980 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59582) - No such process 00:09:57.980 13:29:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59582 is not found' 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59600 ]] 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59600 00:09:57.980 13:29:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59600 ']' 00:09:57.980 13:29:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59600 00:09:57.980 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59600) - No such process 00:09:57.980 Process with pid 59600 is not found 00:09:57.980 13:29:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59600 is not found' 00:09:57.980 13:29:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:57.980 ************************************ 00:09:57.980 END TEST cpu_locks 00:09:57.980 ************************************ 00:09:57.980 00:09:57.981 real 0m31.767s 00:09:57.981 user 0m53.260s 00:09:57.981 sys 0m4.684s 00:09:57.981 13:29:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.981 13:29:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.981 ************************************ 00:09:57.981 END TEST event 00:09:57.981 ************************************ 00:09:57.981 00:09:57.981 real 1m2.378s 00:09:57.981 user 1m54.861s 00:09:57.981 sys 0m8.327s 00:09:57.981 13:29:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.981 13:29:57 event -- common/autotest_common.sh@10 -- # set +x 00:09:57.981 13:29:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:57.981 13:29:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.981 13:29:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.981 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:09:57.981 ************************************ 00:09:57.981 START TEST thread 00:09:57.981 ************************************ 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:57.981 * Looking for test storage... 
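The cleanup above runs killprocess a second time for both pids; the harness probes the pid first, so an already-reaped process yields the "Process with pid ... is not found" message instead of a hard error. A miniature of that idempotent pattern (hedged; the real killprocess in autotest_common.sh also verifies the process name via ps, as the reactor_0/reactor_2 checks in the trace show):

  killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "Process with pid $pid is not found"   # already gone: succeed quietly
      return 0
    fi
    kill "$pid" && wait "$pid"                    # wait works because $pid is our child
  }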
00:09:57.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.981 13:29:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.981 13:29:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.981 13:29:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.981 13:29:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.981 13:29:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.981 13:29:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.981 13:29:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.981 13:29:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.981 13:29:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.981 13:29:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.981 13:29:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.981 13:29:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:57.981 13:29:57 thread -- scripts/common.sh@345 -- # : 1 00:09:57.981 13:29:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.981 13:29:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.981 13:29:57 thread -- scripts/common.sh@365 -- # decimal 1 00:09:57.981 13:29:57 thread -- scripts/common.sh@353 -- # local d=1 00:09:57.981 13:29:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.981 13:29:57 thread -- scripts/common.sh@355 -- # echo 1 00:09:57.981 13:29:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.981 13:29:57 thread -- scripts/common.sh@366 -- # decimal 2 00:09:57.981 13:29:57 thread -- scripts/common.sh@353 -- # local d=2 00:09:57.981 13:29:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.981 13:29:57 thread -- scripts/common.sh@355 -- # echo 2 00:09:57.981 13:29:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.981 13:29:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.981 13:29:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.981 13:29:57 thread -- scripts/common.sh@368 -- # return 0 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.981 --rc genhtml_branch_coverage=1 00:09:57.981 --rc genhtml_function_coverage=1 00:09:57.981 --rc genhtml_legend=1 00:09:57.981 --rc geninfo_all_blocks=1 00:09:57.981 --rc geninfo_unexecuted_blocks=1 00:09:57.981 00:09:57.981 ' 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.981 --rc genhtml_branch_coverage=1 00:09:57.981 --rc genhtml_function_coverage=1 00:09:57.981 --rc genhtml_legend=1 00:09:57.981 --rc geninfo_all_blocks=1 00:09:57.981 --rc geninfo_unexecuted_blocks=1 00:09:57.981 00:09:57.981 ' 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:57.981 --rc genhtml_branch_coverage=1 00:09:57.981 --rc genhtml_function_coverage=1 00:09:57.981 --rc genhtml_legend=1 00:09:57.981 --rc geninfo_all_blocks=1 00:09:57.981 --rc geninfo_unexecuted_blocks=1 00:09:57.981 00:09:57.981 ' 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.981 --rc genhtml_branch_coverage=1 00:09:57.981 --rc genhtml_function_coverage=1 00:09:57.981 --rc genhtml_legend=1 00:09:57.981 --rc geninfo_all_blocks=1 00:09:57.981 --rc geninfo_unexecuted_blocks=1 00:09:57.981 00:09:57.981 ' 00:09:57.981 13:29:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.981 13:29:57 thread -- common/autotest_common.sh@10 -- # set +x 00:09:57.981 ************************************ 00:09:57.981 START TEST thread_poller_perf 00:09:57.981 ************************************ 00:09:57.981 13:29:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:58.238 [2024-11-20 13:29:57.421882] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:58.239 [2024-11-20 13:29:57.421992] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59749 ] 00:09:58.239 [2024-11-20 13:29:57.575807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.496 [2024-11-20 13:29:57.677055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.496 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:59.433 [2024-11-20T13:29:58.860Z] ====================================== 00:09:59.433 [2024-11-20T13:29:58.860Z] busy:2612332730 (cyc) 00:09:59.433 [2024-11-20T13:29:58.860Z] total_run_count: 306000 00:09:59.433 [2024-11-20T13:29:58.860Z] tsc_hz: 2600000000 (cyc) 00:09:59.433 [2024-11-20T13:29:58.860Z] ====================================== 00:09:59.433 [2024-11-20T13:29:58.861Z] poller_cost: 8537 (cyc), 3283 (nsec) 00:09:59.434 00:09:59.434 real 0m1.443s 00:09:59.434 user 0m1.275s 00:09:59.434 sys 0m0.059s 00:09:59.434 13:29:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.434 13:29:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:59.434 ************************************ 00:09:59.434 END TEST thread_poller_perf 00:09:59.434 ************************************ 00:09:59.708 13:29:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:59.708 13:29:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:59.708 13:29:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.708 13:29:58 thread -- common/autotest_common.sh@10 -- # set +x 00:09:59.708 ************************************ 00:09:59.708 START TEST thread_poller_perf 00:09:59.708 ************************************ 00:09:59.708 13:29:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:59.708 [2024-11-20 13:29:58.920198] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:59.708 [2024-11-20 13:29:58.920472] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59791 ] 00:09:59.708 [2024-11-20 13:29:59.080088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.965 [2024-11-20 13:29:59.181200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.965 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:01.337 [2024-11-20T13:30:00.764Z] ====================================== 00:10:01.337 [2024-11-20T13:30:00.764Z] busy:2603534994 (cyc) 00:10:01.337 [2024-11-20T13:30:00.764Z] total_run_count: 3931000 00:10:01.337 [2024-11-20T13:30:00.764Z] tsc_hz: 2600000000 (cyc) 00:10:01.337 [2024-11-20T13:30:00.764Z] ====================================== 00:10:01.337 [2024-11-20T13:30:00.764Z] poller_cost: 662 (cyc), 254 (nsec) 00:10:01.337 00:10:01.337 real 0m1.450s 00:10:01.337 user 0m1.265s 00:10:01.337 sys 0m0.076s 00:10:01.337 13:30:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.337 ************************************ 00:10:01.337 13:30:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:01.337 END TEST thread_poller_perf 00:10:01.337 ************************************ 00:10:01.337 13:30:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:01.337 00:10:01.337 real 0m3.127s 00:10:01.337 user 0m2.655s 00:10:01.337 sys 0m0.251s 00:10:01.337 ************************************ 00:10:01.337 END TEST thread 00:10:01.337 ************************************ 00:10:01.337 13:30:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.337 13:30:00 thread -- common/autotest_common.sh@10 -- # set +x 00:10:01.337 13:30:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:01.337 13:30:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:01.337 13:30:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.337 13:30:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.337 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:10:01.337 ************************************ 00:10:01.337 START TEST app_cmdline 00:10:01.337 ************************************ 00:10:01.337 13:30:00 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:01.337 * Looking for test storage... 
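Both poller_perf tables above derive poller_cost the same way: busy TSC cycles divided by total_run_count gives cycles per poll, and dividing by tsc_hz (2.6 GHz here) converts to nanoseconds. That is why the 1-microsecond-period run costs 8537 cyc, about 3283 ns per poll, while the 0-microsecond busy-poll run costs only 662 cyc, about 254 ns. Recomputing the first table's figures from the logged numbers:

  awk 'BEGIN {
    cyc = 2612332730 / 306000          # busy cycles / total_run_count -> 8537
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / 2.6   # tsc_hz = 2.6e9
  }'
  # prints: poller_cost: 8537 (cyc), 3283 (nsec), matching the table above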
00:10:01.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:01.337 13:30:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.337 13:30:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.337 13:30:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.337 13:30:00 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.337 13:30:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.338 --rc genhtml_branch_coverage=1 00:10:01.338 --rc genhtml_function_coverage=1 00:10:01.338 --rc genhtml_legend=1 00:10:01.338 --rc geninfo_all_blocks=1 00:10:01.338 --rc geninfo_unexecuted_blocks=1 00:10:01.338 00:10:01.338 ' 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.338 --rc genhtml_branch_coverage=1 00:10:01.338 --rc genhtml_function_coverage=1 00:10:01.338 --rc genhtml_legend=1 00:10:01.338 --rc geninfo_all_blocks=1 00:10:01.338 --rc geninfo_unexecuted_blocks=1 00:10:01.338 
00:10:01.338 ' 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.338 --rc genhtml_branch_coverage=1 00:10:01.338 --rc genhtml_function_coverage=1 00:10:01.338 --rc genhtml_legend=1 00:10:01.338 --rc geninfo_all_blocks=1 00:10:01.338 --rc geninfo_unexecuted_blocks=1 00:10:01.338 00:10:01.338 ' 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.338 --rc genhtml_branch_coverage=1 00:10:01.338 --rc genhtml_function_coverage=1 00:10:01.338 --rc genhtml_legend=1 00:10:01.338 --rc geninfo_all_blocks=1 00:10:01.338 --rc geninfo_unexecuted_blocks=1 00:10:01.338 00:10:01.338 ' 00:10:01.338 13:30:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:01.338 13:30:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59875 00:10:01.338 13:30:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59875 00:10:01.338 13:30:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59875 ']' 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.338 13:30:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:01.338 [2024-11-20 13:30:00.607625] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:01.338 [2024-11-20 13:30:00.607874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59875 ] 00:10:01.594 [2024-11-20 13:30:00.763358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.594 [2024-11-20 13:30:00.862765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.231 13:30:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.231 13:30:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:02.231 { 00:10:02.231 "version": "SPDK v25.01-pre git sha1 82b85d9ca", 00:10:02.231 "fields": { 00:10:02.231 "major": 25, 00:10:02.231 "minor": 1, 00:10:02.231 "patch": 0, 00:10:02.231 "suffix": "-pre", 00:10:02.231 "commit": "82b85d9ca" 00:10:02.231 } 00:10:02.231 } 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:02.231 13:30:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:02.231 13:30:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:02.231 13:30:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.489 13:30:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:02.489 13:30:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:02.489 13:30:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:02.489 request: 00:10:02.489 { 00:10:02.489 "method": "env_dpdk_get_mem_stats", 00:10:02.489 "req_id": 1 00:10:02.489 } 00:10:02.489 Got JSON-RPC error response 00:10:02.489 response: 00:10:02.489 { 00:10:02.489 "code": -32601, 00:10:02.489 "message": "Method not found" 00:10:02.489 } 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.489 13:30:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59875 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59875 ']' 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59875 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59875 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59875' 00:10:02.489 killing process with pid 59875 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 59875 00:10:02.489 13:30:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 59875 00:10:04.388 ************************************ 00:10:04.388 END TEST app_cmdline 00:10:04.388 ************************************ 00:10:04.388 00:10:04.388 real 0m3.009s 00:10:04.388 user 0m3.264s 00:10:04.388 sys 0m0.391s 00:10:04.388 13:30:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.388 13:30:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:04.388 13:30:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:04.388 13:30:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.388 13:30:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.388 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:10:04.388 ************************************ 00:10:04.388 START TEST version 00:10:04.388 ************************************ 00:10:04.388 13:30:03 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:04.388 * Looking for test storage... 
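The cmdline test that just completed starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer (the sorted two-element list checked above) and anything else, here env_dpdk_get_mem_stats, is rejected with -32601 "Method not found". The flow in isolation (binary and script paths as in the trace):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON above
  ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # filtered out -> -32601 "Method not found"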
00:10:04.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:04.388 13:30:03 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.388 13:30:03 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.388 13:30:03 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.388 13:30:03 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.388 13:30:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.388 13:30:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.388 13:30:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.388 13:30:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.388 13:30:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.388 13:30:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.388 13:30:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.388 13:30:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.388 13:30:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.388 13:30:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.388 13:30:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.388 13:30:03 version -- scripts/common.sh@344 -- # case "$op" in 00:10:04.388 13:30:03 version -- scripts/common.sh@345 -- # : 1 00:10:04.388 13:30:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.388 13:30:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.388 13:30:03 version -- scripts/common.sh@365 -- # decimal 1 00:10:04.388 13:30:03 version -- scripts/common.sh@353 -- # local d=1 00:10:04.388 13:30:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.388 13:30:03 version -- scripts/common.sh@355 -- # echo 1 00:10:04.388 13:30:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.388 13:30:03 version -- scripts/common.sh@366 -- # decimal 2 00:10:04.389 13:30:03 version -- scripts/common.sh@353 -- # local d=2 00:10:04.389 13:30:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.389 13:30:03 version -- scripts/common.sh@355 -- # echo 2 00:10:04.389 13:30:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.389 13:30:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.389 13:30:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.389 13:30:03 version -- scripts/common.sh@368 -- # return 0 00:10:04.389 13:30:03 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.389 13:30:03 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.389 --rc genhtml_branch_coverage=1 00:10:04.389 --rc genhtml_function_coverage=1 00:10:04.389 --rc genhtml_legend=1 00:10:04.389 --rc geninfo_all_blocks=1 00:10:04.389 --rc geninfo_unexecuted_blocks=1 00:10:04.389 00:10:04.389 ' 00:10:04.389 13:30:03 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.389 --rc genhtml_branch_coverage=1 00:10:04.389 --rc genhtml_function_coverage=1 00:10:04.389 --rc genhtml_legend=1 00:10:04.389 --rc geninfo_all_blocks=1 00:10:04.389 --rc geninfo_unexecuted_blocks=1 00:10:04.389 00:10:04.389 ' 00:10:04.389 13:30:03 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.389 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:04.389 --rc genhtml_branch_coverage=1 00:10:04.389 --rc genhtml_function_coverage=1 00:10:04.389 --rc genhtml_legend=1 00:10:04.389 --rc geninfo_all_blocks=1 00:10:04.389 --rc geninfo_unexecuted_blocks=1 00:10:04.389 00:10:04.389 ' 00:10:04.389 13:30:03 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.389 --rc genhtml_branch_coverage=1 00:10:04.389 --rc genhtml_function_coverage=1 00:10:04.389 --rc genhtml_legend=1 00:10:04.389 --rc geninfo_all_blocks=1 00:10:04.389 --rc geninfo_unexecuted_blocks=1 00:10:04.389 00:10:04.389 ' 00:10:04.389 13:30:03 version -- app/version.sh@17 -- # get_header_version major 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # cut -f2 00:10:04.389 13:30:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:04.389 13:30:03 version -- app/version.sh@17 -- # major=25 00:10:04.389 13:30:03 version -- app/version.sh@18 -- # get_header_version minor 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:04.389 13:30:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # cut -f2 00:10:04.389 13:30:03 version -- app/version.sh@18 -- # minor=1 00:10:04.389 13:30:03 version -- app/version.sh@19 -- # get_header_version patch 00:10:04.389 13:30:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # cut -f2 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:04.389 13:30:03 version -- app/version.sh@19 -- # patch=0 00:10:04.389 13:30:03 version -- app/version.sh@20 -- # get_header_version suffix 00:10:04.389 13:30:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # cut -f2 00:10:04.389 13:30:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:04.389 13:30:03 version -- app/version.sh@20 -- # suffix=-pre 00:10:04.389 13:30:03 version -- app/version.sh@22 -- # version=25.1 00:10:04.389 13:30:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:04.389 13:30:03 version -- app/version.sh@28 -- # version=25.1rc0 00:10:04.389 13:30:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:04.389 13:30:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:04.389 13:30:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:04.389 13:30:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:04.389 00:10:04.389 real 0m0.218s 00:10:04.389 user 0m0.148s 00:10:04.389 sys 0m0.098s 00:10:04.389 ************************************ 00:10:04.389 END TEST version 00:10:04.389 ************************************ 00:10:04.389 13:30:03 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.389 13:30:03 version -- common/autotest_common.sh@10 -- # set +x 00:10:04.389 13:30:03 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:04.389 13:30:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:04.389 13:30:03 -- spdk/autotest.sh@194 -- # uname -s 00:10:04.389 13:30:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:04.389 13:30:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:04.389 13:30:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:04.389 13:30:03 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:04.389 13:30:03 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:04.389 13:30:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.389 13:30:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.389 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:10:04.389 ************************************ 00:10:04.389 START TEST blockdev_nvme 00:10:04.389 ************************************ 00:10:04.389 13:30:03 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:04.389 * Looking for test storage... 00:10:04.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:04.389 13:30:03 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.389 13:30:03 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.389 13:30:03 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.647 13:30:03 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.647 --rc genhtml_branch_coverage=1 00:10:04.647 --rc genhtml_function_coverage=1 00:10:04.647 --rc genhtml_legend=1 00:10:04.647 --rc geninfo_all_blocks=1 00:10:04.647 --rc geninfo_unexecuted_blocks=1 00:10:04.647 00:10:04.647 ' 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.647 --rc genhtml_branch_coverage=1 00:10:04.647 --rc genhtml_function_coverage=1 00:10:04.647 --rc genhtml_legend=1 00:10:04.647 --rc geninfo_all_blocks=1 00:10:04.647 --rc geninfo_unexecuted_blocks=1 00:10:04.647 00:10:04.647 ' 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.647 --rc genhtml_branch_coverage=1 00:10:04.647 --rc genhtml_function_coverage=1 00:10:04.647 --rc genhtml_legend=1 00:10:04.647 --rc geninfo_all_blocks=1 00:10:04.647 --rc geninfo_unexecuted_blocks=1 00:10:04.647 00:10:04.647 ' 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.647 --rc genhtml_branch_coverage=1 00:10:04.647 --rc genhtml_function_coverage=1 00:10:04.647 --rc genhtml_legend=1 00:10:04.647 --rc geninfo_all_blocks=1 00:10:04.647 --rc geninfo_unexecuted_blocks=1 00:10:04.647 00:10:04.647 ' 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:04.647 13:30:03 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60052 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60052 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60052 ']' 00:10:04.647 13:30:03 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:04.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.647 13:30:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 [2024-11-20 13:30:03.959992] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
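The blockdev_nvme prologue just below builds its bdev configuration with gen_nvme.sh and feeds it to load_subsystem_config: one bdev_nvme_attach_controller entry per emulated NVMe controller (Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0), then bdev_wait_for_examine blocks until the namespaces are exposed as bdevs. Trimmed to a single controller (rpc_cmd is the harness wrapper around scripts/rpc.py; the JSON shape is copied from the load_subsystem_config call below):

  rpc_cmd load_subsystem_config -j '{ "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
  ] }'
  rpc_cmd bdev_wait_for_examine   # returns once the Nvme0n1 bdev is registered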
00:10:04.648 [2024-11-20 13:30:03.960271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:10:04.905 [2024-11-20 13:30:04.122956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.905 [2024-11-20 13:30:04.222168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.472 13:30:04 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.472 13:30:04 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:05.472 13:30:04 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:05.472 13:30:04 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:10:05.472 13:30:04 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:05.472 13:30:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:05.472 13:30:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:05.472 13:30:04 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:05.472 13:30:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.472 13:30:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.730 13:30:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.730 13:30:05 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:05.730 13:30:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.730 13:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.730 13:30:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.730 13:30:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:10:05.730 13:30:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:05.730 13:30:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.730 13:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.989 13:30:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.989 13:30:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.989 13:30:05 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:05.989 13:30:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:05.989 13:30:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.989 13:30:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.989 13:30:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:05.989 13:30:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:05.990 13:30:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dd0debba-c02e-408b-81bc-e78915eb7e92"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dd0debba-c02e-408b-81bc-e78915eb7e92",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d7f6d08e-5512-40ec-968f-b6b4d8d63a2d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d7f6d08e-5512-40ec-968f-b6b4d8d63a2d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "11489916-525a-4d6e-8617-e0d6af937848"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11489916-525a-4d6e-8617-e0d6af937848",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "00e6db03-8cd9-45b4-bab7-04f47c0d09d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "00e6db03-8cd9-45b4-bab7-04f47c0d09d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0d6e24cc-ce51-4a98-b4dd-bb67367aa31a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0d6e24cc-ce51-4a98-b4dd-bb67367aa31a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b62750f7-ee78-4426-9e11-7e8efcdb1280"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b62750f7-ee78-4426-9e11-7e8efcdb1280",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:05.990 13:30:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:05.990 13:30:05 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:05.990 13:30:05 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:05.990 13:30:05 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60052 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60052 ']' 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60052 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:05.990 13:30:05 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60052 00:10:05.990 killing process with pid 60052 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60052' 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60052 00:10:05.990 13:30:05 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60052 00:10:07.437 13:30:06 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:07.437 13:30:06 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:07.437 13:30:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:07.437 13:30:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.437 13:30:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.437 ************************************ 00:10:07.437 START TEST bdev_hello_world 00:10:07.437 ************************************ 00:10:07.437 13:30:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:07.694 [2024-11-20 13:30:06.904728] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:07.694 [2024-11-20 13:30:06.905056] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60131 ] 00:10:07.694 [2024-11-20 13:30:07.067088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.951 [2024-11-20 13:30:07.169718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.517 [2024-11-20 13:30:07.710282] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:08.517 [2024-11-20 13:30:07.710338] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:08.517 [2024-11-20 13:30:07.710360] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:08.517 [2024-11-20 13:30:07.712893] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:08.517 [2024-11-20 13:30:07.745063] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:08.517 [2024-11-20 13:30:07.745125] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:08.517 [2024-11-20 13:30:07.745277] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
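For reference, the hello_bdev run traced above reduces to a single command; this is a minimal sketch using the paths from this run (the --json file supplies the bdev configuration, -b names the bdev to open, and the trailing '' is an empty pass-through argument):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1 ''

The example opens Nvme0n1, acquires an I/O channel, writes a buffer containing "Hello World!", reads it back, and stops the app, which is exactly the hello_start / hello_write / write_complete / read_complete NOTICE sequence logged above.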
00:10:08.517 00:10:08.517 [2024-11-20 13:30:07.745308] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:09.579 ************************************ 00:10:09.579 END TEST bdev_hello_world 00:10:09.579 ************************************ 00:10:09.579 00:10:09.579 real 0m1.725s 00:10:09.579 user 0m1.428s 00:10:09.579 sys 0m0.188s 00:10:09.579 13:30:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.579 13:30:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:09.579 13:30:08 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:09.579 13:30:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.579 13:30:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.579 13:30:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.579 ************************************ 00:10:09.579 START TEST bdev_bounds 00:10:09.579 ************************************ 00:10:09.579 Process bdevio pid: 60167 00:10:09.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60167 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60167' 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60167 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60167 ']' 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:09.579 13:30:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:09.579 [2024-11-20 13:30:08.661876] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:09.579 [2024-11-20 13:30:08.662032] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60167 ] 00:10:09.579 [2024-11-20 13:30:08.820097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.579 [2024-11-20 13:30:08.926528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.579 [2024-11-20 13:30:08.926605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.579 [2024-11-20 13:30:08.926994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.151 13:30:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.151 13:30:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:10.151 13:30:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:10.409 I/O targets: 00:10:10.409 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:10.409 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:10.409 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:10.409 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:10.409 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:10.409 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:10.409 00:10:10.409 00:10:10.409 CUnit - A unit testing framework for C - Version 2.1-3 00:10:10.409 http://cunit.sourceforge.net/ 00:10:10.409 00:10:10.409 00:10:10.409 Suite: bdevio tests on: Nvme3n1 00:10:10.409 Test: blockdev write read block ...passed 00:10:10.409 Test: blockdev write zeroes read block ...passed 00:10:10.409 Test: blockdev write zeroes read no split ...passed 00:10:10.409 Test: blockdev write zeroes read split ...passed 00:10:10.409 Test: blockdev write zeroes read split partial ...passed 00:10:10.409 Test: blockdev reset ...[2024-11-20 13:30:09.710363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:10.409 [2024-11-20 13:30:09.713375] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:10:10.409 Test: blockdev write read 8 blocks ...passed
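A note on the COMPARE FAILURE (02/85) completions that appear inside the "comparev and writev" cases here and in the suites below: they come from a compare the test issues against deliberately mismatching data, and each case is still reported as passed, so within this run a miscompare completion followed by "passed" is the expected pattern, not an error. To replay a suite by hand, the same two commands this job used can be combined, as in this sketch (paths taken from this workspace; bdevio is started with -w so it waits until tests.py triggers the run over RPC):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests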
00:10:10.409 passed 00:10:10.409 Test: blockdev write read size > 128k ...passed 00:10:10.409 Test: blockdev write read invalid size ...passed 00:10:10.409 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.409 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.409 Test: blockdev write read max offset ...passed 00:10:10.409 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.409 Test: blockdev writev readv 8 blocks ...passed 00:10:10.409 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.409 Test: blockdev writev readv block ...passed 00:10:10.409 Test: blockdev writev readv size > 128k ...passed 00:10:10.409 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.409 Test: blockdev comparev and writev ...[2024-11-20 13:30:09.722288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf40a000 len:0x1000 00:10:10.409 passed 00:10:10.409 Test: blockdev nvme passthru rw ...[2024-11-20 13:30:09.722837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:10.409 passed 00:10:10.409 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:09.723855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:10.409 passed 00:10:10.409 Test: blockdev nvme admin passthru ...[2024-11-20 13:30:09.724159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:10.409 passed 00:10:10.409 Test: blockdev copy ...passed 00:10:10.409 Suite: bdevio tests on: Nvme2n3 00:10:10.409 Test: blockdev write read block ...passed 00:10:10.409 Test: blockdev write zeroes read block ...passed 00:10:10.409 Test: blockdev write zeroes read no split ...passed 00:10:10.667 Test: blockdev write zeroes read split ...passed 00:10:10.667 Test: blockdev write zeroes read split partial ...passed 00:10:10.667 Test: blockdev reset ...[2024-11-20 13:30:09.924488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:10.667 [2024-11-20 13:30:09.927962] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:10.667 passed 00:10:10.667 Test: blockdev write read 8 blocks ...passed 00:10:10.667 Test: blockdev write read size > 128k ...passed 00:10:10.667 Test: blockdev write read invalid size ...passed 00:10:10.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.667 Test: blockdev write read max offset ...passed 00:10:10.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.667 Test: blockdev writev readv 8 blocks ...passed 00:10:10.667 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.667 Test: blockdev writev readv block ...passed 00:10:10.667 Test: blockdev writev readv size > 128k ...passed 00:10:10.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.667 Test: blockdev comparev and writev ...[2024-11-20 13:30:09.935802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a2606000 len:0x1000 00:10:10.667 [2024-11-20 13:30:09.936094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:10.667 passed 00:10:10.667 Test: blockdev nvme passthru rw ...passed 00:10:10.667 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:09.936892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:10.667 passed 00:10:10.667 Test: blockdev nvme admin passthru ...[2024-11-20 13:30:09.937052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:10.667 passed 00:10:10.667 Test: blockdev copy ...passed 00:10:10.667 Suite: bdevio tests on: Nvme2n2 00:10:10.667 Test: blockdev write read block ...passed 00:10:10.667 Test: blockdev write zeroes read block ...passed 00:10:10.667 Test: blockdev write zeroes read no split ...passed 00:10:10.667 Test: blockdev write zeroes read split ...passed 00:10:10.667 Test: blockdev write zeroes read split partial ...passed 00:10:10.667 Test: blockdev reset ...[2024-11-20 13:30:10.052964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:10.668 [2024-11-20 13:30:10.056021] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:10.668 passed 00:10:10.668 Test: blockdev write read 8 blocks ...passed 00:10:10.668 Test: blockdev write read size > 128k ...passed 00:10:10.668 Test: blockdev write read invalid size ...passed 00:10:10.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.668 Test: blockdev write read max offset ...passed 00:10:10.668 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.668 Test: blockdev writev readv 8 blocks ...passed 00:10:10.668 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.668 Test: blockdev writev readv block ...passed 00:10:10.668 Test: blockdev writev readv size > 128k ...passed 00:10:10.668 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.668 Test: blockdev comparev and writev ...[2024-11-20 13:30:10.063101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf43c000 len:0x1000 00:10:10.668 [2024-11-20 13:30:10.063346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:10.668 passed 00:10:10.668 Test: blockdev nvme passthru rw ...passed 00:10:10.668 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:10.064661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:10.668 [2024-11-20 13:30:10.064931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:10.668 passed 00:10:10.668 Test: blockdev nvme admin passthru ...passed 00:10:10.668 Test: blockdev copy ...passed 00:10:10.668 Suite: bdevio tests on: Nvme2n1 00:10:10.668 Test: blockdev write read block ...passed 00:10:10.668 Test: blockdev write zeroes read block ...passed 00:10:10.668 Test: blockdev write zeroes read no split ...passed 00:10:10.926 Test: blockdev write zeroes read split ...passed 00:10:10.926 Test: blockdev write zeroes read split partial ...passed 00:10:10.926 Test: blockdev reset ...[2024-11-20 13:30:10.125895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:10.926 [2024-11-20 13:30:10.128904] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:10.926 passed 00:10:10.926 Test: blockdev write read 8 blocks ...passed 00:10:10.926 Test: blockdev write read size > 128k ...passed 00:10:10.926 Test: blockdev write read invalid size ...passed 00:10:10.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.926 Test: blockdev write read max offset ...passed 00:10:10.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.926 Test: blockdev writev readv 8 blocks ...passed 00:10:10.926 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.926 Test: blockdev writev readv block ...passed 00:10:10.926 Test: blockdev writev readv size > 128k ...passed 00:10:10.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.926 Test: blockdev comparev and writev ...[2024-11-20 13:30:10.134843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf438000 len:0x1000 00:10:10.926 passed 00:10:10.926 Test: blockdev nvme passthru rw ...[2024-11-20 13:30:10.135022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:10.926 passed 00:10:10.926 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:10.135513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:10.926 [2024-11-20 13:30:10.135540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:10.926 passed 00:10:10.926 Test: blockdev nvme admin passthru ...passed 00:10:10.926 Test: blockdev copy ...passed 00:10:10.926 Suite: bdevio tests on: Nvme1n1 00:10:10.926 Test: blockdev write read block ...passed 00:10:10.926 Test: blockdev write zeroes read block ...passed 00:10:10.926 Test: blockdev write zeroes read no split ...passed 00:10:10.926 Test: blockdev write zeroes read split ...passed 00:10:10.926 Test: blockdev write zeroes read split partial ...passed 00:10:10.926 Test: blockdev reset ...[2024-11-20 13:30:10.182445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:10.926 [2024-11-20 13:30:10.184883] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:10.926 passed 00:10:10.926 Test: blockdev write read 8 blocks ...passed 00:10:10.926 Test: blockdev write read size > 128k ...passed 00:10:10.926 Test: blockdev write read invalid size ...passed 00:10:10.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.926 Test: blockdev write read max offset ...passed 00:10:10.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.926 Test: blockdev writev readv 8 blocks ...passed 00:10:10.926 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.926 Test: blockdev writev readv block ...passed 00:10:10.926 Test: blockdev writev readv size > 128k ...passed 00:10:10.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.926 Test: blockdev comparev and writev ...[2024-11-20 13:30:10.191909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf434000 len:0x1000 00:10:10.926 passed 00:10:10.926 Test: blockdev nvme passthru rw ...[2024-11-20 13:30:10.192071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:10.926 passed 00:10:10.926 Test: blockdev nvme passthru vendor specific ...passed 00:10:10.926 Test: blockdev nvme admin passthru ...[2024-11-20 13:30:10.192807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:10.926 [2024-11-20 13:30:10.192836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:10.926 passed 00:10:10.926 Test: blockdev copy ...passed 00:10:10.926 Suite: bdevio tests on: Nvme0n1 00:10:10.926 Test: blockdev write read block ...passed 00:10:10.926 Test: blockdev write zeroes read block ...passed 00:10:10.926 Test: blockdev write zeroes read no split ...passed 00:10:10.926 Test: blockdev write zeroes read split ...passed 00:10:10.926 Test: blockdev write zeroes read split partial ...passed 00:10:10.926 Test: blockdev reset ...[2024-11-20 13:30:10.251956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:10.926 [2024-11-20 13:30:10.254840] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:10.926 passed 00:10:10.926 Test: blockdev write read 8 blocks ...passed 00:10:10.926 Test: blockdev write read size > 128k ...passed 00:10:10.926 Test: blockdev write read invalid size ...passed 00:10:10.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.926 Test: blockdev write read max offset ...passed 00:10:10.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.926 Test: blockdev writev readv 8 blocks ...passed 00:10:10.926 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.926 Test: blockdev writev readv block ...passed 00:10:10.926 Test: blockdev writev readv size > 128k ...passed 00:10:10.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.926 Test: blockdev comparev and writev ...[2024-11-20 13:30:10.261016] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:10.926 separate metadata which is not supported yet. 
00:10:10.926 passed 00:10:10.926 Test: blockdev nvme passthru rw ...passed 00:10:10.926 Test: blockdev nvme passthru vendor specific ...passed 00:10:10.926 Test: blockdev nvme admin passthru ...[2024-11-20 13:30:10.261554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:10.926 [2024-11-20 13:30:10.261592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:10.926 passed 00:10:10.926 Test: blockdev copy ...passed 00:10:10.926 00:10:10.926 Run Summary: Type Total Ran Passed Failed Inactive 00:10:10.926 suites 6 6 n/a 0 0 00:10:10.926 tests 138 138 138 0 0 00:10:10.926 asserts 893 893 893 0 n/a 00:10:10.926 00:10:10.926 Elapsed time = 1.644 seconds 00:10:10.926 0 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60167 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60167 ']' 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60167 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60167 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60167' 00:10:10.926 killing process with pid 60167 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60167 00:10:10.926 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60167 00:10:11.861 13:30:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:11.861 00:10:11.861 real 0m2.380s 00:10:11.861 user 0m5.891s 00:10:11.861 sys 0m0.271s 00:10:11.861 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.861 13:30:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:11.861 ************************************ 00:10:11.861 END TEST bdev_bounds 00:10:11.861 ************************************ 00:10:11.861 13:30:11 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:11.861 13:30:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.861 13:30:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.861 13:30:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.861 ************************************ 00:10:11.861 START TEST bdev_nbd 00:10:11.861 ************************************ 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:11.861 13:30:11 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:11.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60232 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60232 /var/tmp/spdk-nbd.sock 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60232 ']' 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:11.861 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:11.861 [2024-11-20 13:30:11.088228] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:11.861 [2024-11-20 13:30:11.088347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.861 [2024-11-20 13:30:11.250566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.167 [2024-11-20 13:30:11.351430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:12.733 13:30:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:12.990 1+0 records in 
00:10:12.990 1+0 records out 00:10:12.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375287 s, 10.9 MB/s 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:12.990 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.249 1+0 records in 00:10:13.249 1+0 records out 00:10:13.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374885 s, 10.9 MB/s 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.249 1+0 records in 00:10:13.249 1+0 records out 00:10:13.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450202 s, 9.1 MB/s 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.249 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.508 1+0 records in 00:10:13.508 1+0 records out 00:10:13.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342225 s, 12.0 MB/s 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.508 13:30:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.508 13:30:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.766 1+0 records in 00:10:13.766 1+0 records out 00:10:13.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364962 s, 11.2 MB/s 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.766 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:14.023 1+0 records in 00:10:14.023 1+0 records out 00:10:14.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725865 s, 5.6 MB/s 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:14.023 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd0", 00:10:14.282 "bdev_name": "Nvme0n1" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd1", 00:10:14.282 "bdev_name": "Nvme1n1" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd2", 00:10:14.282 "bdev_name": "Nvme2n1" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd3", 00:10:14.282 "bdev_name": "Nvme2n2" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd4", 00:10:14.282 "bdev_name": "Nvme2n3" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd5", 00:10:14.282 "bdev_name": "Nvme3n1" 00:10:14.282 } 00:10:14.282 ]' 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd0", 00:10:14.282 "bdev_name": "Nvme0n1" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd1", 00:10:14.282 "bdev_name": "Nvme1n1" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd2", 00:10:14.282 "bdev_name": "Nvme2n1" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd3", 00:10:14.282 "bdev_name": "Nvme2n2" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd4", 00:10:14.282 "bdev_name": "Nvme2n3" 00:10:14.282 }, 00:10:14.282 { 00:10:14.282 "nbd_device": "/dev/nbd5", 00:10:14.282 "bdev_name": "Nvme3n1" 00:10:14.282 } 00:10:14.282 ]' 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:14.282 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.540 13:30:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.797 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:15.054 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:15.054 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:15.054 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:15.054 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.054 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.054 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:15.055 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:15.055 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.055 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.055 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.312 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:15.570 13:30:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:15.878 13:30:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:15.878 /dev/nbd0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:15.878 
13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:15.878 1+0 records in 00:10:15.878 1+0 records out 00:10:15.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460591 s, 8.9 MB/s 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:15.878 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:16.136 /dev/nbd1 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:16.137 1+0 records in 00:10:16.137 1+0 records out 00:10:16.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630608 s, 6.5 MB/s 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:16.137 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:16.395 /dev/nbd10 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:16.395 1+0 records in 00:10:16.395 1+0 records out 00:10:16.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734137 s, 5.6 MB/s 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:16.395 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:16.653 /dev/nbd11 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:16.653 13:30:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:16.653 1+0 records in 00:10:16.653 1+0 records out 00:10:16.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369613 s, 11.1 MB/s 00:10:16.653 13:30:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:16.653 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:16.911 /dev/nbd12 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:16.911 1+0 records in 00:10:16.911 1+0 records out 00:10:16.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419016 s, 9.8 MB/s 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:16.911 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:17.169 /dev/nbd13 
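The last attach of the set (Nvme3n1 -> /dev/nbd13) was just issued above. For orientation before its readiness poll below: every attach and detach in this section is the same Unix-socket RPC pattern against the SPDK app listening on /var/tmp/spdk-nbd.sock. A minimal standalone sketch built only from commands that appear verbatim in this trace (the Nvme0n1 / /dev/nbd0 pair is just one of the six exercised here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0           # export a bdev as a kernel block device
    "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'  # list live bdev<->nbd mappings as JSON
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                    # detach again

The polling helpers interleaved throughout exist because attach and detach are not instantaneous: the kernel device has to show up in, or vanish from, /proc/partitions.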
00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.169 1+0 records in 00:10:17.169 1+0 records out 00:10:17.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853612 s, 4.8 MB/s 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:17.169 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:17.427 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:17.427 { 00:10:17.427 "nbd_device": "/dev/nbd0", 00:10:17.427 "bdev_name": "Nvme0n1" 00:10:17.427 }, 00:10:17.427 { 00:10:17.427 "nbd_device": "/dev/nbd1", 00:10:17.427 "bdev_name": "Nvme1n1" 00:10:17.427 }, 00:10:17.427 { 00:10:17.427 "nbd_device": "/dev/nbd10", 00:10:17.427 "bdev_name": "Nvme2n1" 00:10:17.427 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd11", 00:10:17.428 "bdev_name": "Nvme2n2" 00:10:17.428 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd12", 00:10:17.428 "bdev_name": "Nvme2n3" 00:10:17.428 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd13", 00:10:17.428 "bdev_name": "Nvme3n1" 00:10:17.428 } 00:10:17.428 ]' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd0", 00:10:17.428 "bdev_name": "Nvme0n1" 00:10:17.428 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd1", 00:10:17.428 "bdev_name": "Nvme1n1" 00:10:17.428 
}, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd10", 00:10:17.428 "bdev_name": "Nvme2n1" 00:10:17.428 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd11", 00:10:17.428 "bdev_name": "Nvme2n2" 00:10:17.428 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd12", 00:10:17.428 "bdev_name": "Nvme2n3" 00:10:17.428 }, 00:10:17.428 { 00:10:17.428 "nbd_device": "/dev/nbd13", 00:10:17.428 "bdev_name": "Nvme3n1" 00:10:17.428 } 00:10:17.428 ]' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:17.428 /dev/nbd1 00:10:17.428 /dev/nbd10 00:10:17.428 /dev/nbd11 00:10:17.428 /dev/nbd12 00:10:17.428 /dev/nbd13' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:17.428 /dev/nbd1 00:10:17.428 /dev/nbd10 00:10:17.428 /dev/nbd11 00:10:17.428 /dev/nbd12 00:10:17.428 /dev/nbd13' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:17.428 256+0 records in 00:10:17.428 256+0 records out 00:10:17.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00987078 s, 106 MB/s 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:17.428 256+0 records in 00:10:17.428 256+0 records out 00:10:17.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10958 s, 9.6 MB/s 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:17.428 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:17.686 256+0 records in 00:10:17.686 256+0 records out 00:10:17.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.081879 s, 12.8 MB/s 00:10:17.686 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:17.686 13:30:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:17.686 256+0 records in 00:10:17.686 256+0 records out 00:10:17.686 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.178632 s, 5.9 MB/s 00:10:17.686 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:17.686 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:17.946 256+0 records in 00:10:17.946 256+0 records out 00:10:17.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0795266 s, 13.2 MB/s 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:17.946 256+0 records in 00:10:17.946 256+0 records out 00:10:17.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0803682 s, 13.0 MB/s 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:17.946 256+0 records in 00:10:17.946 256+0 records out 00:10:17.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0773929 s, 13.5 MB/s 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:17.946 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:18.205 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:18.462 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:18.463 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:18.722 13:30:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:18.722 13:30:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:18.981 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.238 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:19.495 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:19.496 13:30:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:19.753 malloc_lvol_verify 00:10:19.753 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:20.010 8560b838-5408-4e97-aac3-557701563a42 00:10:20.010 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:20.268 841de6e3-66a2-4487-b5a3-7f731546c82a 00:10:20.268 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:20.268 /dev/nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:20.525 mke2fs 1.47.0 (5-Feb-2023) 00:10:20.525 Discarding device blocks: 0/4096 done 00:10:20.525 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:20.525 00:10:20.525 Allocating group tables: 0/1 done 00:10:20.525 Writing inode tables: 0/1 done 00:10:20.525 Creating journal (1024 blocks): done 00:10:20.525 Writing superblocks and filesystem accounting information: 0/1 done 00:10:20.525 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:20.525 13:30:19 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60232 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60232 ']' 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60232 00:10:20.525 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60232 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.783 killing process with pid 60232 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60232' 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60232 00:10:20.783 13:30:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60232 00:10:21.348 13:30:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:21.348 00:10:21.348 real 0m9.722s 00:10:21.348 user 0m13.856s 00:10:21.348 sys 0m2.996s 00:10:21.348 13:30:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.348 13:30:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:21.348 ************************************ 00:10:21.348 END TEST bdev_nbd 00:10:21.348 ************************************ 00:10:21.606 13:30:20 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:21.606 skipping fio tests on NVMe due to multi-ns failures. 00:10:21.606 13:30:20 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:21.606 13:30:20 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
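Before the bdevperf runs below, it is worth decoding the helpers whose xtrace fills the bdev_nbd section above. From the script line numbers they echo, waitfornbd (autotest_common.sh@872-@893) polls /proc/partitions and then proves the device readable with a single 4 KiB O_DIRECT read, while waitfornbd_exit (nbd_common.sh@35-@45) polls until the name disappears. A sketch of that logic only -- the sleep interval is an assumption, and the real helpers may differ in detail:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered with the kernel?
            sleep 0.1                                          # back-off between polls (assumed)
        done
        # one 4 KiB direct read proves the device actually serves I/O
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]                                       # non-empty read => device is live
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the kernel?
            sleep 0.1                                          # assumed
        done
        return 0
    }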
00:10:21.606 13:30:20 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:21.606 13:30:20 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:21.606 13:30:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:21.606 13:30:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:21.606 13:30:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:21.606 ************************************
00:10:21.606 START TEST bdev_verify
00:10:21.606 ************************************
00:10:21.606 13:30:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:21.606 [2024-11-20 13:30:20.846562] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:10:21.606 [2024-11-20 13:30:20.846712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60604 ]
00:10:21.606 [2024-11-20 13:30:21.007184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:21.864 [2024-11-20 13:30:21.106677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:21.864 [2024-11-20 13:30:21.106798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:22.428 Running I/O for 5 seconds...
00:10:24.737 21247.00 IOPS, 83.00 MiB/s
[2024-11-20T13:30:25.096Z] 22880.00 IOPS, 89.38 MiB/s
[2024-11-20T13:30:26.036Z] 23701.67 IOPS, 92.58 MiB/s
[2024-11-20T13:30:26.972Z] 23647.75 IOPS, 92.37 MiB/s
[2024-11-20T13:30:26.972Z] 24000.00 IOPS, 93.75 MiB/s
00:10:27.545 Latency(us)
00:10:27.545 [2024-11-20T13:30:26.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:27.545 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x0 length 0xbd0bd
00:10:27.545 Nvme0n1 : 5.06 1974.64 7.71 0.00 0.00 64620.79 13409.67 77030.01
00:10:27.545 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:27.545 Nvme0n1 : 5.05 1964.12 7.67 0.00 0.00 64935.51 5999.06 77433.30
00:10:27.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x0 length 0xa0000
00:10:27.545 Nvme1n1 : 5.06 1974.08 7.71 0.00 0.00 64502.22 14619.57 66544.25
00:10:27.545 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0xa0000 length 0xa0000
00:10:27.545 Nvme1n1 : 5.06 1973.47 7.71 0.00 0.00 64540.96 13107.20 66140.95
00:10:27.545 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x0 length 0x80000
00:10:27.545 Nvme2n1 : 5.06 1973.43 7.71 0.00 0.00 64399.19 14619.57 62511.26
00:10:27.545 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x80000 length 0x80000
00:10:27.545 Nvme2n1 : 5.06 1972.02 7.70 0.00 0.00 64406.52 14216.27 63317.86
00:10:27.545 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x0 length 0x80000
00:10:27.545 Nvme2n2 : 5.06 1971.96 7.70 0.00 0.00 64288.40 14518.74 59688.17
00:10:27.545 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x80000 length 0x80000
00:10:27.545 Nvme2n2 : 5.07 1970.71 7.70 0.00 0.00 64278.98 13208.02 60898.07
00:10:27.545 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x0 length 0x80000
00:10:27.545 Nvme2n3 : 5.08 1979.06 7.73 0.00 0.00 63924.86 5016.02 60494.77
00:10:27.545 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x80000 length 0x80000
00:10:27.545 Nvme2n3 : 5.07 1970.17 7.70 0.00 0.00 64141.62 12401.43 61704.66
00:10:27.545 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x0 length 0x20000
00:10:27.545 Nvme3n1 : 5.09 1987.81 7.76 0.00 0.00 63568.27 7360.20 65334.35
00:10:27.545 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:27.545 Verification LBA range: start 0x20000 length 0x20000
00:10:27.545 Nvme3n1 : 5.09 1987.09 7.76 0.00 0.00 63541.41 7158.55 65334.35
00:10:27.545 [2024-11-20T13:30:26.972Z] ===================================================================================================================
00:10:27.545 [2024-11-20T13:30:26.972Z] Total : 23698.55 92.57 0.00 0.00 64260.35 5016.02 77433.30
00:10:29.445
00:10:29.445 real 0m7.616s
00:10:29.445 user 0m13.758s
00:10:29.445 sys 0m0.210s
00:10:29.445 ************************************
00:10:29.445 END TEST bdev_verify
00:10:29.445 ************************************
00:10:29.445 13:30:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:29.445 13:30:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:29.445 13:30:28 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:29.445 13:30:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:29.445 13:30:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:29.445 13:30:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:29.445 ************************************
00:10:29.445 START TEST bdev_verify_big_io
00:10:29.445 ************************************
00:10:29.445 13:30:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:29.445 [2024-11-20 13:30:28.501857] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:10:29.445 [2024-11-20 13:30:28.502009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ]
00:10:29.445 [2024-11-20 13:30:28.663469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:29.445 [2024-11-20 13:30:28.765629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:29.445 [2024-11-20 13:30:28.765727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.009 Running I/O for 5 seconds...
00:10:33.103 64.00 IOPS, 4.00 MiB/s
[2024-11-20T13:30:33.902Z] 776.50 IOPS, 48.53 MiB/s
[2024-11-20T13:30:35.273Z] 1161.33 IOPS, 72.58 MiB/s
[2024-11-20T13:30:35.840Z] 1453.50 IOPS, 90.84 MiB/s
00:10:36.413 Latency(us)
00:10:36.413 [2024-11-20T13:30:35.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:36.413 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x0 length 0xbd0b
00:10:36.413 Nvme0n1 : 5.75 132.42 8.28 0.00 0.00 934585.11 17745.13 1238932.87
00:10:36.413 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:36.413 Nvme0n1 : 5.72 111.84 6.99 0.00 0.00 1101798.48 18249.26 1251838.42
00:10:36.413 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x0 length 0xa000
00:10:36.413 Nvme1n1 : 5.76 129.44 8.09 0.00 0.00 909119.99 76626.71 1032444.06
00:10:36.413 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0xa000 length 0xa000
00:10:36.413 Nvme1n1 : 5.73 111.78 6.99 0.00 0.00 1061059.98 117763.15 1038896.84
00:10:36.413 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x0 length 0x8000
00:10:36.413 Nvme2n1 : 5.76 133.39 8.34 0.00 0.00 858885.51 111310.38 1051802.39
00:10:36.413 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x8000 length 0x8000
00:10:36.413 Nvme2n1 : 5.88 112.41 7.03 0.00 0.00 1006363.28 151640.22 1064707.94
00:10:36.413 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x0 length 0x8000
00:10:36.413 Nvme2n2 : 5.89 141.27 8.83 0.00 0.00 784930.90 38918.30 1096971.82
00:10:36.413 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x8000 length 0x8000
00:10:36.413 Nvme2n2 : 6.01 124.51 7.78 0.00 0.00 892238.00 36700.16 1096971.82
00:10:36.413 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x0 length 0x8000
00:10:36.413 Nvme2n3 : 5.96 150.38 9.40 0.00 0.00 712234.65 37910.06 1109877.37
00:10:36.413 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x8000 length 0x8000
00:10:36.413 Nvme2n3 : 6.06 127.64 7.98 0.00 0.00 835325.55 49202.41 1096971.82
00:10:36.413 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x0 length 0x2000
00:10:36.413 Nvme3n1 : 6.09 178.63 11.16 0.00 0.00 582212.89 107.91 1122782.92
00:10:36.413 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:36.413 Verification LBA range: start 0x2000 length 0x2000
00:10:36.413 Nvme3n1 : 6.10 146.92 9.18 0.00 0.00 705899.95 560.84 1122782.92
00:10:36.413 [2024-11-20T13:30:35.840Z] ===================================================================================================================
00:10:36.413 [2024-11-20T13:30:35.840Z] Total : 1600.64 100.04 0.00 0.00 843345.04 107.91 1251838.42
00:10:37.784
00:10:37.784 real 0m8.589s
00:10:37.784 user 0m15.955s
00:10:37.784 sys 0m0.227s
00:10:37.784 13:30:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:37.784 13:30:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:37.785 ************************************
00:10:37.785 END TEST bdev_verify_big_io
00:10:37.785 ************************************
00:10:37.785 13:30:37 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:37.785 13:30:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:37.785 13:30:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:37.785 13:30:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:38.043 ************************************
00:10:38.043 START TEST bdev_write_zeroes
00:10:38.043 ************************************
00:10:38.043 13:30:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:38.043 [2024-11-20 13:30:37.146686] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:10:38.043 [2024-11-20 13:30:37.146829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ]
00:10:38.043 [2024-11-20 13:30:37.308726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:38.043 [2024-11-20 13:30:37.430337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:38.607 Running I/O for 1 seconds...
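The one-second pass above was launched (see its run_test line earlier) with -q 128 -o 4096 -w write_zeroes -t 1 and, unlike the two verify passes, no -C/-m flags -- which matches its single reactor ("Total cores available: 1") and the single Core Mask 0x1 job per bdev in the table that follows. Since the same result layout repeats for every bdevperf run in this section, here is one row of the verify table further up, read against the header "Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max" with "Latency(us)" qualifying the last three columns (the glosses are my reading of the header, not taken from bdevperf documentation):

    # Nvme0n1 : 5.06 1974.64 7.71 0.00 0.00 64620.79 13409.67 77030.01
    #   runtime(s) = 5.06      seconds the job actually ran
    #   IOPS       = 1974.64   completed I/Os per second
    #   MiB/s      = 7.71      1974.64 IOPS x 4096 B ~= 7.71 MiB/s
    #   Fail/s     = 0.00      failed I/Os per second
    #   TO/s       = 0.00      timed-out I/Os per second
    #   Average    = 64620.79  average latency in microseconds
    #   min        = 13409.67  best-case latency, us
    #   max        = 77030.01  worst-case latency, us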
00:10:39.979 28431.00 IOPS, 111.06 MiB/s 00:10:39.979 Latency(us) 00:10:39.979 [2024-11-20T13:30:39.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.979 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:39.979 Nvme0n1 : 1.02 4479.56 17.50 0.00 0.00 28521.19 4637.93 258111.02 00:10:39.979 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:39.979 Nvme1n1 : 1.02 4899.41 19.14 0.00 0.00 26028.79 8469.27 253271.43 00:10:39.979 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:39.979 Nvme2n1 : 1.02 4893.87 19.12 0.00 0.00 25968.81 8570.09 254884.63 00:10:39.979 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:39.979 Nvme2n2 : 1.02 4888.36 19.10 0.00 0.00 25945.48 8570.09 254884.63 00:10:39.979 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:39.979 Nvme2n3 : 1.02 4882.81 19.07 0.00 0.00 25921.41 6956.90 254884.63 00:10:39.979 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:39.979 Nvme3n1 : 1.02 4939.78 19.30 0.00 0.00 25559.68 6125.10 250045.05 00:10:39.979 [2024-11-20T13:30:39.406Z] =================================================================================================================== 00:10:39.979 [2024-11-20T13:30:39.406Z] Total : 28983.78 113.22 0.00 0.00 26290.42 4637.93 258111.02 00:10:40.547 00:10:40.547 real 0m2.715s 00:10:40.547 user 0m2.407s 00:10:40.547 sys 0m0.192s 00:10:40.547 13:30:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.547 13:30:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:40.547 ************************************ 00:10:40.547 END TEST bdev_write_zeroes 00:10:40.547 ************************************ 00:10:40.547 13:30:39 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:40.547 13:30:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:40.547 13:30:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.547 13:30:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:40.547 ************************************ 00:10:40.547 START TEST bdev_json_nonenclosed 00:10:40.547 ************************************ 00:10:40.547 13:30:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:40.547 [2024-11-20 13:30:39.895762] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:40.547 [2024-11-20 13:30:39.895941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:10:40.805 [2024-11-20 13:30:40.064340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.805 [2024-11-20 13:30:40.165301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.805 [2024-11-20 13:30:40.165381] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:40.805 [2024-11-20 13:30:40.165398] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:40.805 [2024-11-20 13:30:40.165407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:41.063 00:10:41.063 real 0m0.534s 00:10:41.063 user 0m0.315s 00:10:41.063 sys 0m0.115s 00:10:41.063 13:30:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.063 13:30:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:41.063 ************************************ 00:10:41.063 END TEST bdev_json_nonenclosed 00:10:41.063 ************************************ 00:10:41.063 13:30:40 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:41.063 13:30:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:41.063 13:30:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.063 13:30:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.063 ************************************ 00:10:41.063 START TEST bdev_json_nonarray 00:10:41.063 ************************************ 00:10:41.063 13:30:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:41.063 [2024-11-20 13:30:40.433178] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:41.064 [2024-11-20 13:30:40.433277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60893 ] 00:10:41.321 [2024-11-20 13:30:40.579628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.321 [2024-11-20 13:30:40.678482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.321 [2024-11-20 13:30:40.678575] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:41.322 [2024-11-20 13:30:40.678592] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:41.322 [2024-11-20 13:30:40.678602] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:41.580 00:10:41.580 real 0m0.476s 00:10:41.580 user 0m0.290s 00:10:41.580 sys 0m0.083s 00:10:41.580 13:30:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.580 13:30:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 ************************************ 00:10:41.580 END TEST bdev_json_nonarray 00:10:41.580 ************************************ 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:41.580 13:30:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:41.580 ************************************ 00:10:41.580 END TEST blockdev_nvme 00:10:41.580 ************************************ 00:10:41.580 00:10:41.580 real 0m37.164s 00:10:41.580 user 0m57.122s 00:10:41.580 sys 0m4.984s 00:10:41.580 13:30:40 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.580 13:30:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 13:30:40 -- spdk/autotest.sh@209 -- # uname -s 00:10:41.580 13:30:40 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:41.580 13:30:40 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:41.580 13:30:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.580 13:30:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.580 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 ************************************ 00:10:41.580 START TEST blockdev_nvme_gpt 00:10:41.580 ************************************ 00:10:41.580 13:30:40 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:41.580 * Looking for test storage... 
00:10:41.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:41.580 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.580 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.580 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.837 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.837 13:30:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:41.837 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.837 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.837 --rc genhtml_branch_coverage=1 00:10:41.837 --rc genhtml_function_coverage=1 00:10:41.837 --rc genhtml_legend=1 00:10:41.837 --rc geninfo_all_blocks=1 00:10:41.837 --rc geninfo_unexecuted_blocks=1 00:10:41.837 00:10:41.837 ' 00:10:41.837 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.837 --rc 
genhtml_branch_coverage=1 00:10:41.837 --rc genhtml_function_coverage=1 00:10:41.837 --rc genhtml_legend=1 00:10:41.837 --rc geninfo_all_blocks=1 00:10:41.837 --rc geninfo_unexecuted_blocks=1 00:10:41.837 00:10:41.837 ' 00:10:41.837 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.838 --rc genhtml_branch_coverage=1 00:10:41.838 --rc genhtml_function_coverage=1 00:10:41.838 --rc genhtml_legend=1 00:10:41.838 --rc geninfo_all_blocks=1 00:10:41.838 --rc geninfo_unexecuted_blocks=1 00:10:41.838 00:10:41.838 ' 00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.838 --rc genhtml_branch_coverage=1 00:10:41.838 --rc genhtml_function_coverage=1 00:10:41.838 --rc genhtml_legend=1 00:10:41.838 --rc geninfo_all_blocks=1 00:10:41.838 --rc geninfo_unexecuted_blocks=1 00:10:41.838 00:10:41.838 ' 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60971 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60971 
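Note: the scripts/common.sh trace above is a pure-shell version comparison used to pick lcov option spellings: lt 1.15 2 splits both version strings on '.', '-' and ':' (the IFS=.-: reads), then walks the components left to right comparing numerically; here ver1[0]=1 < ver2[0]=2 decides it immediately, so lcov 1.15 is treated as older than 2 and the legacy --rc lcov_* option names are exported. A condensed sketch of the same walk (the function name and shape are illustrative, not the script's exact code):

    lt() {  # usage: lt 1.15 2  -> returns 0 (true) if $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }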
00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60971 ']' 00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.838 13:30:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:41.838 13:30:41 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:41.838 [2024-11-20 13:30:41.159256] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:41.838 [2024-11-20 13:30:41.159379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60971 ] 00:10:42.096 [2024-11-20 13:30:41.319079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.096 [2024-11-20 13:30:41.422357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.661 13:30:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.661 13:30:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:42.661 13:30:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:42.661 13:30:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:42.661 13:30:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:42.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.178 Waiting for block devices as requested 00:10:43.178 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.178 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.178 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.544 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.807 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:48.807 13:30:47 
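Note: get_zoned_devs, traced above, decides whether any namespace is a zoned block device by reading the kernel's /sys/block/<dev>/queue/zoned attribute; a value other than "none" would mark the device as zoned and keep it out of the GPT setup. Every entry here (including the nvme3c3n1 multipath channel device) reports "none", so zoned_devs stays empty. The check reduces to roughly this, assuming the same sysfs layout:

    # sketch of the is_block_zoned loop traced above
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned && $(<$dev/queue/zoned) != none ]] &&
            echo "${dev##*/} is zoned, excluded from GPT setup"
    done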
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:48.807 BYT; 00:10:48.807 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:48.807 BYT; 00:10:48.807 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
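Note: device selection above works by probing each namespace with parted in machine-readable mode and looking for the literal "unrecognised disk label" error, meaning a disk with no partition table; /dev/nvme0n1 (5369 MB, 4096-byte sectors per the parted output) is the first match and becomes gpt_nvme. The [[ ... == *\/\d\e\v\/... ]] pattern is fully backslash-escaped so every character matches literally. Condensed, the probe-and-partition step amounts to:

    # probe one device and, if blank, lay down the two 50% test partitions
    pt=$(parted /dev/nvme0n1 -ms print 2>&1)
    if [[ $pt == *"unrecognised disk label"* ]]; then
        parted -s /dev/nvme0n1 mklabel gpt \
            mkpart SPDK_TEST_first 0% 50% \
            mkpart SPDK_TEST_second 50% 100%
    fi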
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:48.807 13:30:47 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:48.807 13:30:47 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:49.741 The operation has completed successfully. 00:10:49.741 13:30:48 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:50.674 The operation has completed successfully. 00:10:50.674 13:30:49 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.503 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.503 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.503 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.503 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.503 13:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:51.503 13:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.503 13:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:51.503 [] 00:10:51.503 13:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.503 13:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:51.503 13:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:51.503 13:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:51.503 13:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:51.503 13:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:51.503 13:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.503 13:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:52.074 13:30:51 
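Note: the partition type GUIDs are deliberately not hard-coded. get_spdk_gpt and get_spdk_gpt_old scrape SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD out of module/bdev/gpt/gpt.h with an IFS='()' read over the grepped #define, then normalize the macro arguments into GUID form; sgdisk then stamps partition 1 with the current type GUID and partition 2 with the old one, so both recognition paths in the gpt bdev module get covered. The normalization is roughly this (the exact macro layout in gpt.h is assumed, not shown in this log):

    # gpt.h is assumed to contain something like:
    #   #define SPDK_GPT_PART_TYPE_GUID SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
    IFS='()' read -r _ spdk_guid _ \
        < <(grep -w SPDK_GPT_PART_TYPE_GUID module/bdev/gpt/gpt.h)
    spdk_guid=${spdk_guid//, /-}  # 0x6527994e-0x2c5a-... (matches the trace)
    spdk_guid=${spdk_guid//0x/}   # 6527994e-2c5a-4eec-9613-8f5944074e8b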
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.074 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:52.074 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:52.075 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "59f203de-6a78-4dfc-8222-64af2490203a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "59f203de-6a78-4dfc-8222-64af2490203a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b212eb43-2390-416c-8aec-6c39b34f9754"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b212eb43-2390-416c-8aec-6c39b34f9754",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1d6a851a-49dc-4eae-9b4b-f8cdb3a95bcd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d6a851a-49dc-4eae-9b4b-f8cdb3a95bcd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cb2ebe2e-3cfd-4dcc-8c44-391835b3b9c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb2ebe2e-3cfd-4dcc-8c44-391835b3b9c0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e85d05bf-2998-46bd-8bfb-f15d3a7443c5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e85d05bf-2998-46bd-8bfb-f15d3a7443c5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:52.075 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:52.075 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:52.075 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:52.075 13:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60971 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60971 ']' 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60971 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60971 00:10:52.075 killing process with pid 60971 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60971' 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60971 00:10:52.075 13:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60971 00:10:53.463 13:30:52 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:53.463 13:30:52 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:53.463 13:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:53.463 13:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.463 13:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:53.722 ************************************ 00:10:53.722 START TEST bdev_hello_world 00:10:53.722 ************************************ 00:10:53.722 13:30:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:53.722 
[2024-11-20 13:30:52.959264] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:53.723 [2024-11-20 13:30:52.959391] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61590 ] 00:10:53.723 [2024-11-20 13:30:53.120400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.983 [2024-11-20 13:30:53.228342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.556 [2024-11-20 13:30:53.834351] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:54.556 [2024-11-20 13:30:53.834407] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:54.557 [2024-11-20 13:30:53.834430] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:54.557 [2024-11-20 13:30:53.836850] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:54.557 [2024-11-20 13:30:53.837676] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:54.557 [2024-11-20 13:30:53.837702] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:54.557 [2024-11-20 13:30:53.838327] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:54.557 00:10:54.557 [2024-11-20 13:30:53.838354] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:55.499 00:10:55.499 real 0m1.664s 00:10:55.499 user 0m1.377s 00:10:55.499 sys 0m0.178s 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.499 ************************************ 00:10:55.499 END TEST bdev_hello_world 00:10:55.499 ************************************ 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:55.499 13:30:54 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:55.499 13:30:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.499 13:30:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.499 13:30:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:55.499 ************************************ 00:10:55.499 START TEST bdev_bounds 00:10:55.499 ************************************ 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61626 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:55.499 Process bdevio pid: 61626 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61626' 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61626 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61626 ']' 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.499 13:30:54 
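Note: the hello_bdev notices above trace the canonical bdev I/O sequence: open the bdev by name, acquire an I/O channel, submit a write, and from its completion callback submit a read, which returns the string just written ("Hello World!" in the final read_complete notice) before the app stops. The example can be rerun directly against the same config; only -b selects the target bdev:

    # the same invocation the harness used, minus the run_test wrapping
    ./build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b Nvme0n1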
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:55.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:55.499 13:30:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:10:55.499 [2024-11-20 13:30:54.684112] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:10:55.499 [2024-11-20 13:30:54.684238] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61626 ]
00:10:55.499 [2024-11-20 13:30:54.842153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:55.760 [2024-11-20 13:30:54.950869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:55.760 [2024-11-20 13:30:54.951223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:55.760 [2024-11-20 13:30:54.951400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:56.331 13:30:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:56.331 13:30:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:10:56.331 13:30:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:10:56.331 I/O targets:
00:10:56.331 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:10:56.331 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:10:56.331 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:10:56.331 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:10:56.331 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:10:56.331 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:10:56.331 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:10:56.331
00:10:56.331
00:10:56.331 CUnit - A unit testing framework for C - Version 2.1-3
00:10:56.331 http://cunit.sourceforge.net/
00:10:56.331
00:10:56.331
00:10:56.331 Suite: bdevio tests on: Nvme3n1
00:10:56.331 Test: blockdev write read block ...passed
00:10:56.331 Test: blockdev write zeroes read block ...passed
00:10:56.331 Test: blockdev write zeroes read no split ...passed
00:10:56.331 Test: blockdev write zeroes read split ...passed
00:10:56.331 Test: blockdev write zeroes read split partial ...passed
00:10:56.331 Test: blockdev reset ...[2024-11-20 13:30:55.729580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:10:56.331 [2024-11-20 13:30:55.732853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
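Note: the I/O target list is where the earlier partitioning becomes visible: kernel /dev/nvme0n1, the disk parted split 0%/50%/100%, corresponds to SPDK's Nvme1 controller (the gpt entries in the dump above name Nvme1n1 as base_bdev), so it surfaces here as Nvme1n1p1/Nvme1n1p2. The sizes are self-consistent at 4096-byte blocks:

    disk      = 5 GiB             = 1310720 blocks (parted printed 5369MB)
    p1 offset = 256 blocks (1 MiB alignment), p1 size = 655104
    p2 offset = 256 + 655104      = 655360 (matches "offset_blocks": 655360)
    p2 size   = 655103, p2 end    = 655360 + 655103 = 1310463
    leftover  = 1310720 - 1310463 = 257 blocks (backup GPT plus rounding)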
00:10:56.331 passed 00:10:56.331 Test: blockdev write read 8 blocks ...passed 00:10:56.331 Test: blockdev write read size > 128k ...passed 00:10:56.331 Test: blockdev write read invalid size ...passed 00:10:56.331 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.331 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.331 Test: blockdev write read max offset ...passed 00:10:56.331 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.331 Test: blockdev writev readv 8 blocks ...passed 00:10:56.331 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.331 Test: blockdev writev readv block ...passed 00:10:56.331 Test: blockdev writev readv size > 128k ...passed 00:10:56.331 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.331 Test: blockdev comparev and writev ...[2024-11-20 13:30:55.751289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bcc04000 len:0x1000 00:10:56.331 [2024-11-20 13:30:55.751338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:56.331 passed 00:10:56.331 Test: blockdev nvme passthru rw ...passed 00:10:56.331 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:55.754168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:56.331 [2024-11-20 13:30:55.754216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:56.331 passed 00:10:56.592 Test: blockdev nvme admin passthru ...passed 00:10:56.592 Test: blockdev copy ...passed 00:10:56.592 Suite: bdevio tests on: Nvme2n3 00:10:56.592 Test: blockdev write read block ...passed 00:10:56.592 Test: blockdev write zeroes read block ...passed 00:10:56.592 Test: blockdev write zeroes read no split ...passed 00:10:56.592 Test: blockdev write zeroes read split ...passed 00:10:56.592 Test: blockdev write zeroes read split partial ...passed 00:10:56.592 Test: blockdev reset ...[2024-11-20 13:30:55.814254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:56.592 [2024-11-20 13:30:55.818662] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
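Note: the notice pairs in these suites read like errors but are the expected path; each suite passes precisely because the controller rejected the I/O. The "comparev and writev" test evidently COMPAREs against data that does not match what was written, and (02/85) decodes per the NVMe completion status fields as Status Code Type 2h (Media and Data Integrity Errors) / Status Code 85h (Compare Failure); the passthru test sends an unimplemented opcode and expects (00/01), Generic / Invalid Command Opcode, with dnr:1 meaning the Do Not Retry bit is set. The same pattern repeats for every namespace below.

    (sct/sc)  decoded NVMe completion status
    02/85     Media & Data Integrity Errors / Compare Failure
    00/01     Generic Command Status / Invalid Command Opcode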
00:10:56.592 passed 00:10:56.592 Test: blockdev write read 8 blocks ...passed 00:10:56.592 Test: blockdev write read size > 128k ...passed 00:10:56.592 Test: blockdev write read invalid size ...passed 00:10:56.592 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.592 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.592 Test: blockdev write read max offset ...passed 00:10:56.592 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.592 Test: blockdev writev readv 8 blocks ...passed 00:10:56.592 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.592 Test: blockdev writev readv block ...passed 00:10:56.592 Test: blockdev writev readv size > 128k ...passed 00:10:56.592 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.592 Test: blockdev comparev and writev ...[2024-11-20 13:30:55.839522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bcc02000 len:0x1000 00:10:56.592 [2024-11-20 13:30:55.839580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:56.592 passed 00:10:56.592 Test: blockdev nvme passthru rw ...passed 00:10:56.592 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:55.842344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:56.592 [2024-11-20 13:30:55.842382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:56.592 passed 00:10:56.592 Test: blockdev nvme admin passthru ...passed 00:10:56.592 Test: blockdev copy ...passed 00:10:56.592 Suite: bdevio tests on: Nvme2n2 00:10:56.592 Test: blockdev write read block ...passed 00:10:56.592 Test: blockdev write zeroes read block ...passed 00:10:56.593 Test: blockdev write zeroes read no split ...passed 00:10:56.593 Test: blockdev write zeroes read split ...passed 00:10:56.593 Test: blockdev write zeroes read split partial ...passed 00:10:56.593 Test: blockdev reset ...[2024-11-20 13:30:55.898499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:56.593 [2024-11-20 13:30:55.903385] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:56.593 passed 00:10:56.593 Test: blockdev write read 8 blocks ...passed 00:10:56.593 Test: blockdev write read size > 128k ...passed 00:10:56.593 Test: blockdev write read invalid size ...passed 00:10:56.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.593 Test: blockdev write read max offset ...passed 00:10:56.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.593 Test: blockdev writev readv 8 blocks ...passed 00:10:56.593 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.593 Test: blockdev writev readv block ...passed 00:10:56.593 Test: blockdev writev readv size > 128k ...passed 00:10:56.593 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.593 Test: blockdev comparev and writev ...[2024-11-20 13:30:55.921840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1238000 len:0x1000 00:10:56.593 [2024-11-20 13:30:55.921890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:56.593 passed 00:10:56.593 Test: blockdev nvme passthru rw ...passed 00:10:56.593 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:55.924058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:56.593 [2024-11-20 13:30:55.924087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:56.593 passed 00:10:56.593 Test: blockdev nvme admin passthru ...passed 00:10:56.593 Test: blockdev copy ...passed 00:10:56.593 Suite: bdevio tests on: Nvme2n1 00:10:56.593 Test: blockdev write read block ...passed 00:10:56.593 Test: blockdev write zeroes read block ...passed 00:10:56.593 Test: blockdev write zeroes read no split ...passed 00:10:56.593 Test: blockdev write zeroes read split ...passed 00:10:56.593 Test: blockdev write zeroes read split partial ...passed 00:10:56.593 Test: blockdev reset ...[2024-11-20 13:30:55.981168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:56.593 [2024-11-20 13:30:55.984873] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:56.593 passed 00:10:56.593 Test: blockdev write read 8 blocks ...passed 00:10:56.593 Test: blockdev write read size > 128k ...passed 00:10:56.593 Test: blockdev write read invalid size ...passed 00:10:56.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.593 Test: blockdev write read max offset ...passed 00:10:56.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.593 Test: blockdev writev readv 8 blocks ...passed 00:10:56.593 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.593 Test: blockdev writev readv block ...passed 00:10:56.593 Test: blockdev writev readv size > 128k ...passed 00:10:56.593 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.593 Test: blockdev comparev and writev ...[2024-11-20 13:30:56.001397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1234000 len:0x1000 00:10:56.593 [2024-11-20 13:30:56.001495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:56.593 passed 00:10:56.593 Test: blockdev nvme passthru rw ...passed 00:10:56.593 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:56.003139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:56.593 [2024-11-20 13:30:56.003195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:56.593 passed 00:10:56.593 Test: blockdev nvme admin passthru ...passed 00:10:56.593 Test: blockdev copy ...passed 00:10:56.593 Suite: bdevio tests on: Nvme1n1p2 00:10:56.593 Test: blockdev write read block ...passed 00:10:56.855 Test: blockdev write zeroes read block ...passed 00:10:56.855 Test: blockdev write zeroes read no split ...passed 00:10:56.855 Test: blockdev write zeroes read split ...passed 00:10:56.855 Test: blockdev write zeroes read split partial ...passed 00:10:56.855 Test: blockdev reset ...[2024-11-20 13:30:56.068041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:56.855 [2024-11-20 13:30:56.070773] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:56.855 passed 00:10:56.855 Test: blockdev write read 8 blocks ...passed 00:10:56.855 Test: blockdev write read size > 128k ...passed 00:10:56.855 Test: blockdev write read invalid size ...passed 00:10:56.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.855 Test: blockdev write read max offset ...passed 00:10:56.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.855 Test: blockdev writev readv 8 blocks ...passed 00:10:56.855 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.855 Test: blockdev writev readv block ...passed 00:10:56.855 Test: blockdev writev readv size > 128k ...passed 00:10:56.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.855 Test: blockdev comparev and writev ...[2024-11-20 13:30:56.089709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d1230000 len:0x1000 00:10:56.855 [2024-11-20 13:30:56.089756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:56.855 passed 00:10:56.855 Test: blockdev nvme passthru rw ...passed 00:10:56.855 Test: blockdev nvme passthru vendor specific ...passed 00:10:56.855 Test: blockdev nvme admin passthru ...passed 00:10:56.855 Test: blockdev copy ...passed 00:10:56.855 Suite: bdevio tests on: Nvme1n1p1 00:10:56.855 Test: blockdev write read block ...passed 00:10:56.855 Test: blockdev write zeroes read block ...passed 00:10:56.855 Test: blockdev write zeroes read no split ...passed 00:10:56.855 Test: blockdev write zeroes read split ...passed 00:10:56.855 Test: blockdev write zeroes read split partial ...passed 00:10:56.855 Test: blockdev reset ...[2024-11-20 13:30:56.146477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:56.855 [2024-11-20 13:30:56.152034] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:56.855 passed 00:10:56.855 Test: blockdev write read 8 blocks ...passed 00:10:56.855 Test: blockdev write read size > 128k ...passed 00:10:56.855 Test: blockdev write read invalid size ...passed 00:10:56.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.855 Test: blockdev write read max offset ...passed 00:10:56.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.855 Test: blockdev writev readv 8 blocks ...passed 00:10:56.855 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.855 Test: blockdev writev readv block ...passed 00:10:56.855 Test: blockdev writev readv size > 128k ...passed 00:10:56.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.855 Test: blockdev comparev and writev ...[2024-11-20 13:30:56.173648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bd60e000 len:0x1000 00:10:56.855 [2024-11-20 13:30:56.173701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:56.855 passed 00:10:56.855 Test: blockdev nvme passthru rw ...passed 00:10:56.855 Test: blockdev nvme passthru vendor specific ...passed 00:10:56.855 Test: blockdev nvme admin passthru ...passed 00:10:56.855 Test: blockdev copy ...passed 00:10:56.855 Suite: bdevio tests on: Nvme0n1 00:10:56.855 Test: blockdev write read block ...passed 00:10:56.855 Test: blockdev write zeroes read block ...passed 00:10:56.855 Test: blockdev write zeroes read no split ...passed 00:10:56.855 Test: blockdev write zeroes read split ...passed 00:10:56.855 Test: blockdev write zeroes read split partial ...passed 00:10:56.855 Test: blockdev reset ...[2024-11-20 13:30:56.228072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:56.855 [2024-11-20 13:30:56.232194] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:56.855 passed 00:10:56.855 Test: blockdev write read 8 blocks ...passed 00:10:56.855 Test: blockdev write read size > 128k ...passed 00:10:56.855 Test: blockdev write read invalid size ...passed 00:10:56.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.855 Test: blockdev write read max offset ...passed 00:10:56.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.855 Test: blockdev writev readv 8 blocks ...passed 00:10:56.855 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.855 Test: blockdev writev readv block ...passed 00:10:56.855 Test: blockdev writev readv size > 128k ...passed 00:10:56.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.855 Test: blockdev comparev and writev ...passed 00:10:56.855 Test: blockdev nvme passthru rw ...[2024-11-20 13:30:56.249917] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:56.855 separate metadata which is not supported yet. 
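Despite the *ERROR* tag, the "skipping comparev_and_writev on bdev Nvme0n1" message above is an intentional skip, not a failure: Nvme0n1 is formatted with separate (non-interleaved) metadata, which bdevio's fused compare-and-write path does not support yet, and the run summary below still counts the suite as passed. A hedged check of the metadata layout (key names assumed from typical bdev_get_bdevs output):

    # md_size > 0 together with md_interleave == false is the "separate
    # metadata" layout the skip refers to; key names are an assumption
    ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {block_size, md_size, md_interleave}'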
00:10:56.855 passed 00:10:56.855 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:30:56.252200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:56.855 [2024-11-20 13:30:56.252248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:56.855 passed 00:10:56.855 Test: blockdev nvme admin passthru ...passed 00:10:56.855 Test: blockdev copy ...passed 00:10:56.855 00:10:56.855 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.855 suites 7 7 n/a 0 0 00:10:56.855 tests 161 161 161 0 0 00:10:56.855 asserts 1025 1025 1025 0 n/a 00:10:56.855 00:10:56.855 Elapsed time = 1.459 seconds 00:10:56.855 0 00:10:56.855 13:30:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61626 00:10:56.855 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61626 ']' 00:10:56.855 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61626 00:10:56.855 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61626 00:10:57.117 killing process with pid 61626 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61626' 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61626 00:10:57.117 13:30:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61626 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:57.691 00:10:57.691 real 0m2.393s 00:10:57.691 user 0m6.118s 00:10:57.691 sys 0m0.302s 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:57.691 ************************************ 00:10:57.691 END TEST bdev_bounds 00:10:57.691 ************************************ 00:10:57.691 13:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:57.691 13:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.691 13:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.691 13:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.691 ************************************ 00:10:57.691 START TEST bdev_nbd 00:10:57.691 ************************************ 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:57.691 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61686 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61686 /var/tmp/spdk-nbd.sock 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61686 ']' 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.692 13:30:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:57.951 [2024-11-20 13:30:57.151493] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
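Everything from here to the end of the section is nbd_function_test at work: bdev_svc comes up on its own RPC socket with the bdev JSON config, each of the seven bdevs is exported as a kernel NBD device, verified, and torn down again. Boiled down to the commands this log actually records (paths shortened to repo-relative; root privileges assumed for the NBD ioctls):

    # start the minimal bdev application and point all RPCs at its socket
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    # export a bdev as a kernel block device, list exports, then tear down
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0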
00:10:57.951 [2024-11-20 13:30:57.151617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.951 [2024-11-20 13:30:57.306825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.212 [2024-11-20 13:30:57.410686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:58.785 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.045 1+0 records in 00:10:59.045 1+0 records out 00:10:59.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137431 s, 3.0 MB/s 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:59.045 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.304 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.305 1+0 records in 00:10:59.305 1+0 records out 00:10:59.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839057 s, 4.9 MB/s 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.305 1+0 records in 00:10:59.305 1+0 records out 00:10:59.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958868 s, 4.3 MB/s 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:59.305 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.564 1+0 records in 00:10:59.564 1+0 records out 00:10:59.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118559 s, 3.5 MB/s 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:59.564 13:30:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.822 1+0 records in 00:10:59.822 1+0 records out 00:10:59.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403149 s, 10.2 MB/s 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:59.822 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.080 1+0 records in 00:11:00.080 1+0 records out 00:11:00.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041244 s, 9.9 MB/s 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:00.080 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.339 1+0 records in 00:11:00.339 1+0 records out 00:11:00.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044759 s, 9.2 MB/s 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:00.339 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd0", 00:11:00.622 "bdev_name": "Nvme0n1" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd1", 00:11:00.622 "bdev_name": "Nvme1n1p1" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd2", 00:11:00.622 "bdev_name": "Nvme1n1p2" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd3", 00:11:00.622 "bdev_name": "Nvme2n1" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd4", 00:11:00.622 "bdev_name": "Nvme2n2" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd5", 00:11:00.622 "bdev_name": "Nvme2n3" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd6", 00:11:00.622 "bdev_name": "Nvme3n1" 00:11:00.622 } 00:11:00.622 ]' 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd0", 00:11:00.622 "bdev_name": "Nvme0n1" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd1", 00:11:00.622 "bdev_name": "Nvme1n1p1" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd2", 00:11:00.622 "bdev_name": "Nvme1n1p2" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd3", 00:11:00.622 "bdev_name": "Nvme2n1" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd4", 00:11:00.622 "bdev_name": "Nvme2n2" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd5", 00:11:00.622 "bdev_name": "Nvme2n3" 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "nbd_device": "/dev/nbd6", 00:11:00.622 "bdev_name": "Nvme3n1" 00:11:00.622 } 00:11:00.622 ]' 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.622 13:30:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.884 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.143 13:31:00 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.403 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.664 13:31:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.925 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
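Each nbd_stop_disk above is paired with waitfornbd_exit, which simply polls /proc/partitions until the nbd name drops out. A rough reconstruction from the xtrace (the retry delay between attempts is not visible in the log and is assumed):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done as soon as the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed polling interval
        done
        return 0
    }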
00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.186 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:02.447 13:31:01 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:02.447 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:02.709 /dev/nbd0 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.709 1+0 records in 00:11:02.709 1+0 records out 00:11:02.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646718 s, 6.3 MB/s 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:02.709 13:31:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:02.709 /dev/nbd1 00:11:02.969 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:02.969 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:02.970 13:31:02 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.970 1+0 records in 00:11:02.970 1+0 records out 00:11:02.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127171 s, 3.2 MB/s 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:02.970 /dev/nbd10 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.970 1+0 records in 00:11:02.970 1+0 records out 00:11:02.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000919425 s, 4.5 MB/s 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.970 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:03.230 /dev/nbd11 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:03.230 1+0 records in 00:11:03.230 1+0 records out 00:11:03.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105742 s, 3.9 MB/s 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:03.230 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:03.490 /dev/nbd12 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
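The start-side counterpart, waitfornbd, is what produces the dd/stat/rm triples throughout this stretch: wait for the device node to appear, read one 4 KiB block with O_DIRECT, and require a non-empty result. Reconstructed from the xtrace above (only the sleep between retries is assumed):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; the interval is not in the xtrace
        done
        # one O_DIRECT block read proves the kernel<->SPDK data path is live
        dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ]
    }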
00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:03.490 1+0 records in 00:11:03.490 1+0 records out 00:11:03.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117993 s, 3.5 MB/s 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:03.490 13:31:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:03.750 /dev/nbd13 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:03.750 1+0 records in 00:11:03.750 1+0 records out 00:11:03.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109593 s, 3.7 MB/s 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:03.750 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:04.011 /dev/nbd14 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:04.011 1+0 records in 00:11:04.011 1+0 records out 00:11:04.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120047 s, 3.4 MB/s 00:11:04.011 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd0", 00:11:04.274 "bdev_name": "Nvme0n1" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd1", 00:11:04.274 "bdev_name": "Nvme1n1p1" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd10", 00:11:04.274 "bdev_name": "Nvme1n1p2" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd11", 00:11:04.274 "bdev_name": "Nvme2n1" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd12", 00:11:04.274 "bdev_name": "Nvme2n2" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd13", 00:11:04.274 "bdev_name": "Nvme2n3" 
00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd14", 00:11:04.274 "bdev_name": "Nvme3n1" 00:11:04.274 } 00:11:04.274 ]' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd0", 00:11:04.274 "bdev_name": "Nvme0n1" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd1", 00:11:04.274 "bdev_name": "Nvme1n1p1" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd10", 00:11:04.274 "bdev_name": "Nvme1n1p2" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd11", 00:11:04.274 "bdev_name": "Nvme2n1" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd12", 00:11:04.274 "bdev_name": "Nvme2n2" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd13", 00:11:04.274 "bdev_name": "Nvme2n3" 00:11:04.274 }, 00:11:04.274 { 00:11:04.274 "nbd_device": "/dev/nbd14", 00:11:04.274 "bdev_name": "Nvme3n1" 00:11:04.274 } 00:11:04.274 ]' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:04.274 /dev/nbd1 00:11:04.274 /dev/nbd10 00:11:04.274 /dev/nbd11 00:11:04.274 /dev/nbd12 00:11:04.274 /dev/nbd13 00:11:04.274 /dev/nbd14' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:04.274 /dev/nbd1 00:11:04.274 /dev/nbd10 00:11:04.274 /dev/nbd11 00:11:04.274 /dev/nbd12 00:11:04.274 /dev/nbd13 00:11:04.274 /dev/nbd14' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:04.274 256+0 records in 00:11:04.274 256+0 records out 00:11:04.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709906 s, 148 MB/s 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:04.274 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:04.536 256+0 records in 00:11:04.536 256+0 records out 00:11:04.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.249242 s, 4.2 MB/s 00:11:04.536 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:04.536 13:31:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:04.796 256+0 records in 00:11:04.796 256+0 records out 00:11:04.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242382 s, 4.3 MB/s 00:11:04.796 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:04.796 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:05.108 256+0 records in 00:11:05.108 256+0 records out 00:11:05.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.258879 s, 4.1 MB/s 00:11:05.108 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.108 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:05.383 256+0 records in 00:11:05.383 256+0 records out 00:11:05.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248331 s, 4.2 MB/s 00:11:05.383 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.383 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:05.645 256+0 records in 00:11:05.645 256+0 records out 00:11:05.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214462 s, 4.9 MB/s 00:11:05.645 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.645 13:31:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:05.905 256+0 records in 00:11:05.905 256+0 records out 00:11:05.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222659 s, 4.7 MB/s 00:11:05.905 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.905 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:06.166 256+0 records in 00:11:06.166 256+0 records out 00:11:06.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234971 s, 4.5 MB/s 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.166 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:06.426 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.426 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.427 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.688 13:31:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.950 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.213 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.474 13:31:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.736 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:07.998 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:08.264 malloc_lvol_verify 00:11:08.264 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:08.525 e26f3447-1db7-424f-bcf8-6f81a199613d 00:11:08.525 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:08.785 2a015251-29bb-4511-8df4-e74103932289 00:11:08.785 13:31:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:08.785 /dev/nbd0 00:11:08.785 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:08.785 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:08.786 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:08.786 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:08.786 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:08.786 mke2fs 1.47.0 (5-Feb-2023) 00:11:08.786 Discarding device blocks: 0/4096 done 00:11:08.786 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:08.786 00:11:08.786 Allocating group tables: 0/1 done 00:11:08.786 Writing inode tables: 0/1 done 00:11:09.060 Creating journal (1024 blocks): done 00:11:09.060 Writing superblocks and filesystem accounting information: 0/1 done 00:11:09.060 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61686 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61686 ']' 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61686 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61686 00:11:09.060 killing process with pid 61686 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61686' 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61686 00:11:09.060 13:31:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61686 00:11:10.002 ************************************ 00:11:10.002 END TEST bdev_nbd 00:11:10.002 ************************************ 00:11:10.002 13:31:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:10.002 00:11:10.002 real 0m12.172s 00:11:10.002 user 0m16.605s 00:11:10.002 sys 0m4.013s 00:11:10.002 13:31:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.002 13:31:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:10.002 13:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:10.002 13:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:10.002 skipping fio tests on NVMe due to multi-ns failures. 00:11:10.002 13:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:10.002 13:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:10.002 13:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:10.002 13:31:09 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:10.002 13:31:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:10.002 13:31:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.002 13:31:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.002 ************************************ 00:11:10.002 START TEST bdev_verify 00:11:10.002 ************************************ 00:11:10.002 13:31:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:10.002 [2024-11-20 13:31:09.388217] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:10.003 [2024-11-20 13:31:09.388344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:11:10.263 [2024-11-20 13:31:09.548954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:10.264 [2024-11-20 13:31:09.653759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.264 [2024-11-20 13:31:09.653857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.836 Running I/O for 5 seconds... 
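[editor's note] The whole verify stage is driven by the single bdevperf invocation traced above. Spelled out with its flags annotated (paths are this workspace's; flag meanings follow bdevperf's usage text, and -C is reproduced here simply because the harness passes it):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write, read back, compare;
    # -t 5: run for five seconds; -m 0x3: two reactors on cores 0 and 1.
    "$bdevperf" --json "$cfg" -q 128 -o 4096 -w verify -t 5 -C -m 0x3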
00:11:13.161 18304.00 IOPS, 71.50 MiB/s [2024-11-20T13:31:13.602Z] 18656.00 IOPS, 72.88 MiB/s [2024-11-20T13:31:14.545Z] 18496.00 IOPS, 72.25 MiB/s [2024-11-20T13:31:15.488Z] 18288.00 IOPS, 71.44 MiB/s [2024-11-20T13:31:15.488Z] 18304.00 IOPS, 71.50 MiB/s 00:11:16.061 Latency(us) 00:11:16.061 [2024-11-20T13:31:15.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.061 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x0 length 0xbd0bd 00:11:16.061 Nvme0n1 : 5.11 1278.72 5.00 0.00 0.00 99849.92 22786.36 89935.56 00:11:16.061 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:16.061 Nvme0n1 : 5.10 1303.98 5.09 0.00 0.00 97369.36 18551.73 81062.99 00:11:16.061 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x0 length 0x4ff80 00:11:16.061 Nvme1n1p1 : 5.11 1277.86 4.99 0.00 0.00 99770.13 25306.98 83886.08 00:11:16.061 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:16.061 Nvme1n1p1 : 5.11 1303.51 5.09 0.00 0.00 97211.76 16131.94 79449.80 00:11:16.061 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x0 length 0x4ff7f 00:11:16.061 Nvme1n1p2 : 5.11 1277.27 4.99 0.00 0.00 99692.28 25206.15 82272.89 00:11:16.061 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:16.061 Nvme1n1p2 : 5.11 1302.41 5.09 0.00 0.00 97074.61 16131.94 79449.80 00:11:16.061 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x0 length 0x80000 00:11:16.061 Nvme2n1 : 5.12 1275.81 4.98 0.00 0.00 99574.21 26416.05 83886.08 00:11:16.061 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x80000 length 0x80000 00:11:16.061 Nvme2n1 : 5.11 1301.69 5.08 0.00 0.00 96974.74 12048.54 80256.39 00:11:16.061 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x0 length 0x80000 00:11:16.061 Nvme2n2 : 5.12 1275.31 4.98 0.00 0.00 99419.43 25105.33 86305.87 00:11:16.061 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x80000 length 0x80000 00:11:16.061 Nvme2n2 : 5.11 1301.30 5.08 0.00 0.00 96922.86 10637.00 82676.18 00:11:16.061 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.061 Verification LBA range: start 0x0 length 0x80000 00:11:16.062 Nvme2n3 : 5.12 1274.80 4.98 0.00 0.00 99267.34 25306.98 87919.06 00:11:16.062 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.062 Verification LBA range: start 0x80000 length 0x80000 00:11:16.062 Nvme2n3 : 5.10 1304.75 5.10 0.00 0.00 97898.54 21475.64 85902.57 00:11:16.062 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:16.062 Verification LBA range: start 0x0 length 0x20000 00:11:16.062 Nvme3n1 : 5.12 1274.32 4.98 0.00 0.00 99119.15 23290.49 90742.15 00:11:16.062 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:16.062 Verification LBA range: start 0x20000 length 0x20000 00:11:16.062 
Nvme3n1 : 5.10 1304.35 5.10 0.00 0.00 97558.22 19156.68 83482.78 00:11:16.062 [2024-11-20T13:31:15.489Z] =================================================================================================================== 00:11:16.062 [2024-11-20T13:31:15.489Z] Total : 18056.09 70.53 0.00 0.00 98396.45 10637.00 90742.15 00:11:17.445 00:11:17.445 real 0m7.372s 00:11:17.445 user 0m13.536s 00:11:17.445 sys 0m0.208s 00:11:17.445 13:31:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.445 ************************************ 00:11:17.445 END TEST bdev_verify 00:11:17.445 ************************************ 00:11:17.445 13:31:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 13:31:16 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:17.445 13:31:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:17.445 13:31:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.445 13:31:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 ************************************ 00:11:17.445 START TEST bdev_verify_big_io 00:11:17.445 ************************************ 00:11:17.445 13:31:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:17.446 [2024-11-20 13:31:16.825077] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:17.446 [2024-11-20 13:31:16.825204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62211 ] 00:11:17.705 [2024-11-20 13:31:16.987868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.706 [2024-11-20 13:31:17.102602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.706 [2024-11-20 13:31:17.102603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.646 Running I/O for 5 seconds... 
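[editor's note] A quick consistency check on these summary tables: the MiB/s column is just IOPS times the I/O size. For the first verify ramp sample above (4096-byte I/Os) and the first ramp sample of the 64 KiB big-I/O run below:

    echo 'scale=2; 18304 * 4096 / 1024 / 1024' | bc   # 71.50 MiB/s, as reported
    echo 'scale=2; 16 * 65536 / 1024 / 1024' | bc     # 1.00 MiB/s, as reported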
00:11:21.449 16.00 IOPS, 1.00 MiB/s [2024-11-20T13:31:23.420Z] 977.50 IOPS, 61.09 MiB/s [2024-11-20T13:31:23.991Z] 1486.00 IOPS, 92.88 MiB/s [2024-11-20T13:31:24.252Z] 2021.00 IOPS, 126.31 MiB/s 00:11:24.825 Latency(us) 00:11:24.825 [2024-11-20T13:31:24.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.825 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0xbd0b 00:11:24.825 Nvme0n1 : 5.81 104.54 6.53 0.00 0.00 1183194.30 29239.14 1264743.98 00:11:24.825 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:24.825 Nvme0n1 : 5.90 97.66 6.10 0.00 0.00 1262584.04 22080.59 1290555.08 00:11:24.825 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0x4ff8 00:11:24.825 Nvme1n1p1 : 5.90 103.74 6.48 0.00 0.00 1137161.63 97194.93 1129235.69 00:11:24.825 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:24.825 Nvme1n1p1 : 5.90 97.60 6.10 0.00 0.00 1229422.45 111310.38 1090519.04 00:11:24.825 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0x4ff7 00:11:24.825 Nvme1n1p2 : 5.90 108.40 6.78 0.00 0.00 1072748.70 91145.45 993727.41 00:11:24.825 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:24.825 Nvme1n1p2 : 5.91 97.53 6.10 0.00 0.00 1195514.57 122602.73 1000180.18 00:11:24.825 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0x8000 00:11:24.825 Nvme2n1 : 6.00 111.68 6.98 0.00 0.00 1010322.84 90338.86 1032444.06 00:11:24.825 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x8000 length 0x8000 00:11:24.825 Nvme2n1 : 6.10 99.80 6.24 0.00 0.00 1115715.92 95178.44 1006632.96 00:11:24.825 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0x8000 00:11:24.825 Nvme2n2 : 6.00 111.48 6.97 0.00 0.00 979890.17 91548.75 1071160.71 00:11:24.825 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x8000 length 0x8000 00:11:24.825 Nvme2n2 : 6.11 94.34 5.90 0.00 0.00 1148923.45 93161.94 1935832.62 00:11:24.825 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0x8000 00:11:24.825 Nvme2n3 : 6.14 120.78 7.55 0.00 0.00 877895.69 56461.78 1096971.82 00:11:24.825 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x8000 length 0x8000 00:11:24.825 Nvme2n3 : 6.17 106.62 6.66 0.00 0.00 995851.83 3755.72 1987454.82 00:11:24.825 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x0 length 0x2000 00:11:24.825 Nvme3n1 : 6.16 136.18 8.51 0.00 0.00 758403.42 1550.18 1148594.02 00:11:24.825 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:24.825 Verification LBA range: start 0x2000 length 0x2000 00:11:24.825 Nvme3n1 : 6.17 112.73 7.05 0.00 0.00 
911438.01 13913.80 2039077.02 00:11:24.825 [2024-11-20T13:31:24.252Z] =================================================================================================================== 00:11:24.825 [2024-11-20T13:31:24.252Z] Total : 1503.09 93.94 0.00 0.00 1047507.99 1550.18 2039077.02 00:11:26.211 00:11:26.211 real 0m8.775s 00:11:26.211 user 0m16.349s 00:11:26.211 sys 0m0.248s 00:11:26.211 13:31:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.211 ************************************ 00:11:26.211 13:31:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.211 END TEST bdev_verify_big_io 00:11:26.211 ************************************ 00:11:26.211 13:31:25 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:26.211 13:31:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:26.211 13:31:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.211 13:31:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.211 ************************************ 00:11:26.211 START TEST bdev_write_zeroes 00:11:26.211 ************************************ 00:11:26.211 13:31:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:26.471 [2024-11-20 13:31:25.650357] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:26.471 [2024-11-20 13:31:25.650485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62320 ] 00:11:26.471 [2024-11-20 13:31:25.814280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.732 [2024-11-20 13:31:25.920765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.302 Running I/O for 1 seconds... 
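[editor's note] Every stage in this log is launched through the harness's run_test helper, which produces the START/END banners and the real/user/sys lines seen throughout. Roughly, and only as a paraphrase of the autotest_common.sh wrapper rather than its verbatim source:

    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                 # source of the real/user/sys timing lines
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }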
00:11:28.281 51968.00 IOPS, 203.00 MiB/s 00:11:28.281 Latency(us) 00:11:28.281 [2024-11-20T13:31:27.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.281 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme0n1 : 1.02 7439.00 29.06 0.00 0.00 17166.05 13107.20 26819.35 00:11:28.281 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme1n1p1 : 1.03 7429.69 29.02 0.00 0.00 17163.56 12905.55 27222.65 00:11:28.281 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme1n1p2 : 1.03 7420.33 28.99 0.00 0.00 17130.07 13107.20 25004.50 00:11:28.281 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme2n1 : 1.03 7411.86 28.95 0.00 0.00 17039.55 11846.89 24500.38 00:11:28.281 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme2n2 : 1.03 7403.29 28.92 0.00 0.00 17029.97 11494.01 24399.56 00:11:28.281 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme2n3 : 1.03 7394.86 28.89 0.00 0.00 17019.26 10334.52 24298.73 00:11:28.281 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:28.281 Nvme3n1 : 1.03 7386.45 28.85 0.00 0.00 17009.75 10082.46 26012.75 00:11:28.281 [2024-11-20T13:31:27.708Z] =================================================================================================================== 00:11:28.281 [2024-11-20T13:31:27.708Z] Total : 51885.48 202.68 0.00 0.00 17079.74 10082.46 27222.65 00:11:29.223 00:11:29.223 real 0m2.730s 00:11:29.223 user 0m2.418s 00:11:29.223 sys 0m0.193s 00:11:29.223 13:31:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.223 13:31:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:29.223 ************************************ 00:11:29.223 END TEST bdev_write_zeroes 00:11:29.223 ************************************ 00:11:29.223 13:31:28 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.223 13:31:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:29.223 13:31:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.223 13:31:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:29.223 ************************************ 00:11:29.223 START TEST bdev_json_nonenclosed 00:11:29.223 ************************************ 00:11:29.223 13:31:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.223 [2024-11-20 13:31:28.448161] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:29.223 [2024-11-20 13:31:28.448281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62373 ] 00:11:29.223 [2024-11-20 13:31:28.610580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.483 [2024-11-20 13:31:28.715922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.483 [2024-11-20 13:31:28.716023] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:29.483 [2024-11-20 13:31:28.716041] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:29.483 [2024-11-20 13:31:28.716050] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:29.483 00:11:29.483 real 0m0.517s 00:11:29.483 user 0m0.312s 00:11:29.483 sys 0m0.100s 00:11:29.483 ************************************ 00:11:29.483 END TEST bdev_json_nonenclosed 00:11:29.483 ************************************ 00:11:29.483 13:31:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.483 13:31:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:29.743 13:31:28 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.743 13:31:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:29.743 13:31:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.743 13:31:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:29.743 ************************************ 00:11:29.743 START TEST bdev_json_nonarray 00:11:29.743 ************************************ 00:11:29.743 13:31:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.743 [2024-11-20 13:31:29.049523] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:29.743 [2024-11-20 13:31:29.049686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62399 ] 00:11:30.001 [2024-11-20 13:31:29.224933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.001 [2024-11-20 13:31:29.328018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.001 [2024-11-20 13:31:29.328127] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
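[editor's note] Both JSON tests here are negative tests: bdevperf is fed a deliberately malformed --json config and is expected to fail in json_config_prepare_ctx, exactly as the *ERROR* lines in this log show. Reconstructed from the error text (not the fixture files' verbatim contents): nonenclosed.json holds configuration that is not enclosed in a top-level {}, and nonarray.json supplies "subsystems" as an object where an array is required. A well-formed config, for contrast, has this shape:

    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }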
00:11:30.001 [2024-11-20 13:31:29.328145] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:30.001 [2024-11-20 13:31:29.328155] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.261 00:11:30.261 real 0m0.553s 00:11:30.261 user 0m0.339s 00:11:30.261 sys 0m0.109s 00:11:30.261 13:31:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.261 ************************************ 00:11:30.261 END TEST bdev_json_nonarray 00:11:30.261 ************************************ 00:11:30.261 13:31:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:30.261 13:31:29 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:30.261 13:31:29 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:30.261 13:31:29 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:30.262 13:31:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:30.262 13:31:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.262 13:31:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:30.262 ************************************ 00:11:30.262 START TEST bdev_gpt_uuid 00:11:30.262 ************************************ 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62424 00:11:30.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62424 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62424 ']' 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.262 13:31:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:30.262 [2024-11-20 13:31:29.662076] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
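[editor's note] The GPT UUID assertions that follow pull each partition bdev over RPC and compare its GUIDs with jq. Condensed, the pattern is (default RPC socket assumed, UUID as in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev=$("$rpc" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    echo "$bdev" | jq -r length                                            # expect exactly 1 bdev
    echo "$bdev" | jq -r '.[0].aliases[0]'                                 # alias is the partition GUID
    echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'  # must match the alias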
00:11:30.262 [2024-11-20 13:31:29.662386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62424 ] 00:11:30.524 [2024-11-20 13:31:29.822318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.524 [2024-11-20 13:31:29.924985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.468 Some configs were skipped because the RPC state that can call them passed over. 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.468 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:31.468 { 00:11:31.468 "name": "Nvme1n1p1", 00:11:31.468 "aliases": [ 00:11:31.468 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:31.468 ], 00:11:31.468 "product_name": "GPT Disk", 00:11:31.468 "block_size": 4096, 00:11:31.468 "num_blocks": 655104, 00:11:31.468 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:31.468 "assigned_rate_limits": { 00:11:31.468 "rw_ios_per_sec": 0, 00:11:31.468 "rw_mbytes_per_sec": 0, 00:11:31.468 "r_mbytes_per_sec": 0, 00:11:31.468 "w_mbytes_per_sec": 0 00:11:31.468 }, 00:11:31.468 "claimed": false, 00:11:31.468 "zoned": false, 00:11:31.468 "supported_io_types": { 00:11:31.468 "read": true, 00:11:31.468 "write": true, 00:11:31.468 "unmap": true, 00:11:31.468 "flush": true, 00:11:31.468 "reset": true, 00:11:31.468 "nvme_admin": false, 00:11:31.468 "nvme_io": false, 00:11:31.468 "nvme_io_md": false, 00:11:31.468 "write_zeroes": true, 00:11:31.468 "zcopy": false, 00:11:31.468 "get_zone_info": false, 00:11:31.468 "zone_management": false, 00:11:31.468 "zone_append": false, 00:11:31.468 "compare": true, 00:11:31.468 "compare_and_write": false, 00:11:31.468 "abort": true, 00:11:31.468 "seek_hole": false, 00:11:31.468 "seek_data": false, 00:11:31.468 "copy": true, 00:11:31.468 "nvme_iov_md": false 00:11:31.468 }, 00:11:31.468 "driver_specific": { 
00:11:31.468 "gpt": { 00:11:31.468 "base_bdev": "Nvme1n1", 00:11:31.468 "offset_blocks": 256, 00:11:31.468 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:31.469 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:31.469 "partition_name": "SPDK_TEST_first" 00:11:31.469 } 00:11:31.469 } 00:11:31.469 } 00:11:31.469 ]' 00:11:31.469 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.730 13:31:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.730 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.730 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:31.730 { 00:11:31.730 "name": "Nvme1n1p2", 00:11:31.730 "aliases": [ 00:11:31.730 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:31.730 ], 00:11:31.730 "product_name": "GPT Disk", 00:11:31.730 "block_size": 4096, 00:11:31.730 "num_blocks": 655103, 00:11:31.730 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:31.730 "assigned_rate_limits": { 00:11:31.730 "rw_ios_per_sec": 0, 00:11:31.730 "rw_mbytes_per_sec": 0, 00:11:31.730 "r_mbytes_per_sec": 0, 00:11:31.730 "w_mbytes_per_sec": 0 00:11:31.730 }, 00:11:31.730 "claimed": false, 00:11:31.730 "zoned": false, 00:11:31.730 "supported_io_types": { 00:11:31.730 "read": true, 00:11:31.730 "write": true, 00:11:31.730 "unmap": true, 00:11:31.730 "flush": true, 00:11:31.730 "reset": true, 00:11:31.730 "nvme_admin": false, 00:11:31.730 "nvme_io": false, 00:11:31.730 "nvme_io_md": false, 00:11:31.730 "write_zeroes": true, 00:11:31.730 "zcopy": false, 00:11:31.730 "get_zone_info": false, 00:11:31.730 "zone_management": false, 00:11:31.730 "zone_append": false, 00:11:31.731 "compare": true, 00:11:31.731 "compare_and_write": false, 00:11:31.731 "abort": true, 00:11:31.731 "seek_hole": false, 00:11:31.731 "seek_data": false, 00:11:31.731 "copy": true, 00:11:31.731 "nvme_iov_md": false 00:11:31.731 }, 00:11:31.731 "driver_specific": { 00:11:31.731 "gpt": { 00:11:31.731 "base_bdev": "Nvme1n1", 00:11:31.731 "offset_blocks": 655360, 00:11:31.731 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:31.731 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:31.731 "partition_name": "SPDK_TEST_second" 00:11:31.731 } 00:11:31.731 } 00:11:31.731 } 00:11:31.731 ]' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62424 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62424 ']' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62424 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62424 00:11:31.731 killing process with pid 62424 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62424' 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62424 00:11:31.731 13:31:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62424 00:11:33.646 00:11:33.646 real 0m3.062s 00:11:33.646 user 0m3.239s 00:11:33.646 sys 0m0.362s 00:11:33.646 13:31:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.646 13:31:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:33.646 ************************************ 00:11:33.646 END TEST bdev_gpt_uuid 00:11:33.646 ************************************ 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:33.646 13:31:32 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:33.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.907 Waiting for block devices as requested 00:11:33.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.907 0000:00:10.0 (1b36 0010): 
00:11:39.469 13:31:38 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:39.469 13:31:38 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:39.729 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:39.729 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:39.729 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:39.729 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:39.729 13:31:38 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:39.729 00:11:39.729 real 0m58.010s 00:11:39.729 user 1m13.179s 00:11:39.729 sys 0m8.237s 00:11:39.729 13:31:38 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.729 ************************************ 00:11:39.729 END TEST blockdev_nvme_gpt 00:11:39.729 ************************************ 00:11:39.729 13:31:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:39.729 13:31:38 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:39.729 13:31:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.729 13:31:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.729 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:11:39.729 ************************************ 00:11:39.729 START TEST nvme 00:11:39.729 ************************************ 00:11:39.729 13:31:39 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:39.729 * Looking for test storage... 00:11:39.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:39.729 13:31:39 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.729 13:31:39 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.729 13:31:39 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.729 13:31:39 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.729 13:31:39 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.729 13:31:39 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.729 13:31:39 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.729 13:31:39 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.729 13:31:39 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.729 13:31:39 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.729 13:31:39 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.729 13:31:39 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.729 13:31:39 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.729 13:31:39 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.729 13:31:39 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.729 13:31:39 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:39.729 13:31:39 nvme -- scripts/common.sh@345 -- # : 1 00:11:39.729 13:31:39 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.729 13:31:39 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:11:39.988 13:31:39 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:39.988 13:31:39 nvme -- scripts/common.sh@353 -- # local d=1 00:11:39.988 13:31:39 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.988 13:31:39 nvme -- scripts/common.sh@355 -- # echo 1 00:11:39.988 13:31:39 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.988 13:31:39 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:39.988 13:31:39 nvme -- scripts/common.sh@353 -- # local d=2 00:11:39.988 13:31:39 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.988 13:31:39 nvme -- scripts/common.sh@355 -- # echo 2 00:11:39.988 13:31:39 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.988 13:31:39 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.988 13:31:39 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.988 13:31:39 nvme -- scripts/common.sh@368 -- # return 0 00:11:39.988 13:31:39 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.988 13:31:39 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.988 --rc genhtml_branch_coverage=1 00:11:39.988 --rc genhtml_function_coverage=1 00:11:39.988 --rc genhtml_legend=1 00:11:39.988 --rc geninfo_all_blocks=1 00:11:39.988 --rc geninfo_unexecuted_blocks=1 00:11:39.988 00:11:39.988 ' 00:11:39.988 13:31:39 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.988 --rc genhtml_branch_coverage=1 00:11:39.988 --rc genhtml_function_coverage=1 00:11:39.988 --rc genhtml_legend=1 00:11:39.988 --rc geninfo_all_blocks=1 00:11:39.988 --rc geninfo_unexecuted_blocks=1 00:11:39.988 00:11:39.988 ' 00:11:39.988 13:31:39 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.988 --rc genhtml_branch_coverage=1 00:11:39.988 --rc genhtml_function_coverage=1 00:11:39.988 --rc genhtml_legend=1 00:11:39.988 --rc geninfo_all_blocks=1 00:11:39.988 --rc geninfo_unexecuted_blocks=1 00:11:39.988 00:11:39.988 ' 00:11:39.988 13:31:39 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.988 --rc genhtml_branch_coverage=1 00:11:39.988 --rc genhtml_function_coverage=1 00:11:39.988 --rc genhtml_legend=1 00:11:39.988 --rc geninfo_all_blocks=1 00:11:39.988 --rc geninfo_unexecuted_blocks=1 00:11:39.988 00:11:39.988 ' 00:11:39.988 13:31:39 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:40.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.817 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.817 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.817 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.817 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:41.077 13:31:40 nvme -- nvme/nvme.sh@79 -- # uname 00:11:41.077 13:31:40 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:41.077 13:31:40 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:41.077 13:31:40 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:41.077 13:31:40 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:41.077 Waiting for stub to ready for secondary processes... 13:31:40 nvme -- common/autotest_common.sh@1075 -- # stubpid=63067 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63067 ]] 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:41.077 13:31:40 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:41.077 [2024-11-20 13:31:40.308566] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:41.077 [2024-11-20 13:31:40.308692] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:41.684 [2024-11-20 13:31:41.069158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:41.963 [2024-11-20 13:31:41.166389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.963 [2024-11-20 13:31:41.166962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.963 [2024-11-20 13:31:41.167027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.963 [2024-11-20 13:31:41.181810] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:41.964 [2024-11-20 13:31:41.181842] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:41.964 [2024-11-20 13:31:41.193073] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:41.964 [2024-11-20 13:31:41.193185] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:41.964 [2024-11-20 13:31:41.195748] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:41.964 [2024-11-20 13:31:41.195932] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:41.964 [2024-11-20 13:31:41.196114] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:41.964 [2024-11-20 13:31:41.197866] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:41.964 [2024-11-20 13:31:41.198020] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:41.964 [2024-11-20 13:31:41.198078] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:41.964 [2024-11-20 13:31:41.201180] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:41.964 [2024-11-20 13:31:41.201347] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:41.964 [2024-11-20 13:31:41.201408] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:41.964 [2024-11-20 13:31:41.201448] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:41.964 [2024-11-20 13:31:41.201486] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
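With the cuse devices up, the stub is ready; the @1077-@1080 trace earlier in this block is the wait loop that got us here: poll until the primary process creates /var/run/spdk_stub0, bailing out if /proc/<stubpid> disappears first. A rough stand-alone re-creation of that loop (wait_for_stub is a hypothetical name; the real logic lives in common/autotest_common.sh):

    # Wait for the stub (an SPDK primary process) to publish its socket so
    # that secondary processes can attach. Sketch of the traced loop.
    wait_for_stub() {
        local stubpid=$1
        while [ ! -e /var/run/spdk_stub0 ]; do
            # If the stub died before creating the socket, stop waiting.
            [[ -e /proc/$stubpid ]] || return 1
            sleep 1s
        done
    }
    wait_for_stub 63067 && echo done.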
00:11:41.964 done. 00:11:41.964 13:31:41 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:41.964 13:31:41 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:41.964 13:31:41 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:41.964 13:31:41 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:41.964 13:31:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.964 13:31:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.964 ************************************ 00:11:41.964 START TEST nvme_reset 00:11:41.964 ************************************ 00:11:41.964 13:31:41 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:42.224 Initializing NVMe Controllers 00:11:42.224 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:42.224 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:42.224 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:42.224 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:42.224 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:42.224 ************************************ 00:11:42.224 END TEST nvme_reset 00:11:42.224 ************************************ 00:11:42.224 00:11:42.224 real 0m0.204s 00:11:42.224 user 0m0.081s 00:11:42.224 sys 0m0.088s 00:11:42.224 13:31:41 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.224 13:31:41 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:42.224 13:31:41 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:42.224 13:31:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.224 13:31:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.224 13:31:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:42.224 ************************************ 00:11:42.224 START TEST nvme_identify 00:11:42.224 ************************************ 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:42.224 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:42.224 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:42.224 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:42.224 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:42.224 13:31:41 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:42.224 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:42.488
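The @1498-@1504 trace above is get_nvme_bdfs assembling the controller list that the identify dump below walks: gen_nvme.sh emits an SPDK JSON config and jq pulls each params.traddr out of it. Condensed to its core (a sketch; assumes $rootdir points at the SPDK checkout):

    # Enumerate local NVMe controllers by PCI address, as traced above.
    get_nvme_bdfs() {
        local -a bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # nothing for SPDK to test
        printf '%s\n' "${bdfs[@]}"
    }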
===================================================== 00:11:42.488 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:42.488 ===================================================== 00:11:42.488 Controller Capabilities/Features 00:11:42.488 ================================ 00:11:42.488 Vendor ID: 1b36 00:11:42.488 Subsystem Vendor ID: 1af4 00:11:42.488 Serial Number: 12341 00:11:42.488 Model Number: QEMU NVMe Ctrl 00:11:42.488 Firmware Version: 8.0.0 00:11:42.488 Recommended Arb Burst: 6 00:11:42.488 IEEE OUI Identifier: 00 54 52 00:11:42.488 Multi-path I/O 00:11:42.488 May have multiple subsystem ports: No 00:11:42.488 May have multiple controllers: No 00:11:42.488 Associated with SR-IOV VF: No 00:11:42.488 Max Data Transfer Size: 524288 00:11:42.488 Max Number of Namespaces: 256 00:11:42.488 Max Number of I/O Queues: 64 00:11:42.488 NVMe Specification Version (VS): 1.4 00:11:42.488 NVMe Specification Version (Identify): 1.4 00:11:42.488 Maximum Queue Entries: 2048 00:11:42.488 Contiguous Queues Required: Yes 00:11:42.488 Arbitration Mechanisms Supported 00:11:42.488 Weighted Round Robin: Not Supported 00:11:42.488 Vendor Specific: Not Supported 00:11:42.488 Reset Timeout: 7500 ms 00:11:42.488 Doorbell Stride: 4 bytes 00:11:42.488 NVM Subsystem Reset: Not Supported 00:11:42.488 Command Sets Supported 00:11:42.488 NVM Command Set: Supported 00:11:42.488 Boot Partition: Not Supported 00:11:42.488 Memory Page Size Minimum: 4096 bytes 00:11:42.488 Memory Page Size Maximum: 65536 bytes 00:11:42.488 Persistent Memory Region: Not Supported 00:11:42.488 Optional Asynchronous Events Supported 00:11:42.488 Namespace Attribute Notices: Supported 00:11:42.488 Firmware Activation Notices: Not Supported 00:11:42.488 ANA Change Notices: Not Supported 00:11:42.488 PLE Aggregate Log Change Notices: Not Supported 00:11:42.488 LBA Status Info Alert Notices: Not Supported 00:11:42.488 EGE Aggregate Log Change Notices: Not Supported 00:11:42.488 Normal NVM Subsystem Shutdown event: Not Supported 00:11:42.488 Zone Descriptor Change Notices: Not Supported 00:11:42.488 Discovery Log Change Notices: Not Supported 00:11:42.488 Controller Attributes 00:11:42.488 128-bit Host Identifier: Not Supported 00:11:42.488 Non-Operational Permissive Mode: Not Supported 00:11:42.488 NVM Sets: Not Supported 00:11:42.488 Read Recovery Levels: Not Supported 00:11:42.488 Endurance Groups: Not Supported 00:11:42.488 Predictable Latency Mode: Not Supported 00:11:42.488 Traffic Based Keep ALive: Not Supported 00:11:42.488 Namespace Granularity: Not Supported 00:11:42.488 SQ Associations: Not Supported 00:11:42.488 UUID List: Not Supported 00:11:42.488 Multi-Domain Subsystem: Not Supported 00:11:42.488 Fixed Capacity Management: Not Supported 00:11:42.488 Variable Capacity Management: Not Supported 00:11:42.488 Delete Endurance Group: Not Supported 00:11:42.488 Delete NVM Set: Not Supported 00:11:42.488 Extended LBA Formats Supported: Supported 00:11:42.488 Flexible Data Placement Supported: Not Supported 00:11:42.488 00:11:42.488 Controller Memory Buffer Support 00:11:42.488 ================================ 00:11:42.488 Supported: No 00:11:42.488 00:11:42.488 Persistent Memory Region Support 00:11:42.488 ================================ 00:11:42.488 Supported: No 00:11:42.488 00:11:42.488 Admin Command Set Attributes 00:11:42.488 ============================ 00:11:42.488 Security Send/Receive: Not Supported 00:11:42.488 Format NVM: Supported 00:11:42.488 Firmware Activate/Download: Not Supported 00:11:42.488 Namespace Management: 
Supported 00:11:42.488 Device Self-Test: Not Supported 00:11:42.488 Directives: Supported 00:11:42.488 NVMe-MI: Not Supported 00:11:42.488 Virtualization Management: Not Supported 00:11:42.488 Doorbell Buffer Config: Supported 00:11:42.488 Get LBA Status Capability: Not Supported 00:11:42.488 Command & Feature Lockdown Capability: Not Supported 00:11:42.488 Abort Command Limit: 4 00:11:42.488 Async Event Request Limit: 4 00:11:42.488 Number of Firmware Slots: N/A 00:11:42.488 Firmware Slot 1 Read-Only: N/A 00:11:42.488 Firmware Activation Without Reset: N/A 00:11:42.488 Multiple Update Detection Support: N/A 00:11:42.488 Firmware Update Granularity: No Information Provided 00:11:42.488 Per-Namespace SMART Log: Yes 00:11:42.488 Asymmetric Namespace Access Log Page: Not Supported 00:11:42.488 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:42.488 Command Effects Log Page: Supported 00:11:42.488 Get Log Page Extended Data: Supported 00:11:42.488 Telemetry Log Pages: Not Supported 00:11:42.488 Persistent Event Log Pages: Not Supported 00:11:42.488 Supported Log Pages Log Page: May Support 00:11:42.488 Commands Supported & Effects Log Page: Not Supported 00:11:42.488 Feature Identifiers & Effects Log Page:May Support 00:11:42.488 NVMe-MI Commands & Effects Log Page: May Support 00:11:42.488 Data Area 4 for Telemetry Log: Not Supported 00:11:42.488 Error Log Page Entries Supported: 1 00:11:42.488 Keep Alive: Not Supported 00:11:42.488 00:11:42.488 NVM Command Set Attributes 00:11:42.488 ========================== 00:11:42.488 Submission Queue Entry Size 00:11:42.488 Max: 64 00:11:42.488 Min: 64 00:11:42.488 Completion Queue Entry Size 00:11:42.488 Max: 16 00:11:42.488 Min: 16 00:11:42.488 Number of Namespaces: 256 00:11:42.488 Compare Command: Supported 00:11:42.488 Write Uncorrectable Command: Not Supported 00:11:42.488 Dataset Management Command: Supported 00:11:42.488 Write Zeroes Command: Supported 00:11:42.488 Set Features Save Field: Supported 00:11:42.488 Reservations: Not Supported 00:11:42.488 Timestamp: Supported 00:11:42.488 Copy: Supported 00:11:42.488 Volatile Write Cache: Present 00:11:42.488 Atomic Write Unit (Normal): 1 00:11:42.488 Atomic Write Unit (PFail): 1 00:11:42.488 Atomic Compare & Write Unit: 1 00:11:42.488 Fused Compare & Write: Not Supported 00:11:42.488 Scatter-Gather List 00:11:42.488 SGL Command Set: Supported 00:11:42.488 SGL Keyed: Not Supported 00:11:42.488 SGL Bit Bucket Descriptor: Not Supported 00:11:42.488 SGL Metadata Pointer: Not Supported 00:11:42.488 Oversized SGL: Not Supported 00:11:42.488 SGL Metadata Address: Not Supported 00:11:42.488 SGL Offset: Not Supported 00:11:42.488 Transport SGL Data Block: Not Supported 00:11:42.488 Replay Protected Memory Block: Not Supported 00:11:42.488 00:11:42.488 Firmware Slot Information 00:11:42.488 ========================= 00:11:42.488 Active slot: 1 00:11:42.488 Slot 1 Firmware Revision: 1.0 00:11:42.488 00:11:42.488 00:11:42.488 Commands Supported and Effects 00:11:42.488 ============================== 00:11:42.488 Admin Commands 00:11:42.488 -------------- 00:11:42.488 Delete I/O Submission Queue (00h): Supported 00:11:42.488 Create I/O Submission Queue (01h): Supported 00:11:42.488 Get Log Page (02h): Supported 00:11:42.488 Delete I/O Completion Queue (04h): Supported 00:11:42.488 Create I/O Completion Queue (05h): Supported 00:11:42.488 Identify (06h): Supported 00:11:42.488 Abort (08h): Supported 00:11:42.488 Set Features (09h): Supported 00:11:42.488 Get Features (0Ah): Supported 00:11:42.488 Asynchronous 
Event Request (0Ch): Supported 00:11:42.488 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:42.488 Directive Send (19h): Supported 00:11:42.488 Directive Receive (1Ah): Supported 00:11:42.488 Virtualization Management (1Ch): Supported 00:11:42.488 Doorbell Buffer Config (7Ch): Supported 00:11:42.488 Format NVM (80h): Supported LBA-Change 00:11:42.488 I/O Commands 00:11:42.488 ------------ 00:11:42.488 Flush (00h): Supported LBA-Change 00:11:42.488 Write (01h): Supported LBA-Change 00:11:42.488 Read (02h): Supported 00:11:42.488 Compare (05h): Supported 00:11:42.488 Write Zeroes (08h): Supported LBA-Change 00:11:42.488 Dataset Management (09h): Supported LBA-Change 00:11:42.488 Unknown (0Ch): Supported 00:11:42.488 Unknown (12h): Supported 00:11:42.488 Copy (19h): Supported LBA-Change 00:11:42.488 Unknown (1Dh): Supported LBA-Change 00:11:42.488 00:11:42.488 Error Log 00:11:42.488 ========= 00:11:42.488 00:11:42.488 Arbitration 00:11:42.489 =========== 00:11:42.489 Arbitration Burst: no limit 00:11:42.489 00:11:42.489 Power Management 00:11:42.489 ================ 00:11:42.489 Number of Power States: 1 00:11:42.489 Current Power State: Power State #0 00:11:42.489 Power State #0: 00:11:42.489 Max Power: 25.00 W 00:11:42.489 Non-Operational State: Operational 00:11:42.489 Entry Latency: 16 microseconds 00:11:42.489 Exit Latency: 4 microseconds 00:11:42.489 Relative Read Throughput: 0 00:11:42.489 Relative Read Latency: 0 00:11:42.489 Relative Write Throughput: 0 00:11:42.489 Relative Write Latency: 0 00:11:42.489 [2024-11-20 13:31:41.811359] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63089 terminated unexpected 00:11:42.489 [2024-11-20 13:31:41.812667] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63089 terminated unexpected 00:11:42.489 Idle Power: Not Reported 00:11:42.489 Active Power: Not Reported 00:11:42.489 Non-Operational Permissive Mode: Not Supported 00:11:42.489 00:11:42.489 Health Information 00:11:42.489 ================== 00:11:42.489 Critical Warnings: 00:11:42.489 Available Spare Space: OK 00:11:42.489 Temperature: OK 00:11:42.489 Device Reliability: OK 00:11:42.489 Read Only: No 00:11:42.489 Volatile Memory Backup: OK 00:11:42.489 Current Temperature: 323 Kelvin (50 Celsius) 00:11:42.489 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:42.489 Available Spare: 0% 00:11:42.489 Available Spare Threshold: 0% 00:11:42.489 Life Percentage Used: 0% 00:11:42.489 Data Units Read: 1001 00:11:42.489 Data Units Written: 869 00:11:42.489 Host Read Commands: 52178 00:11:42.489 Host Write Commands: 50956 00:11:42.489 Controller Busy Time: 0 minutes 00:11:42.489 Power Cycles: 0 00:11:42.489 Power On Hours: 0 hours 00:11:42.489 Unsafe Shutdowns: 0 00:11:42.489 Unrecoverable Media Errors: 0 00:11:42.489 Lifetime Error Log Entries: 0 00:11:42.489 Warning Temperature Time: 0 minutes 00:11:42.489 Critical Temperature Time: 0 minutes 00:11:42.489 00:11:42.489 Number of Queues 00:11:42.489 ================ 00:11:42.489 Number of I/O Submission Queues: 64 00:11:42.489 Number of I/O Completion Queues: 64 00:11:42.489 00:11:42.489 ZNS Specific Controller Data 00:11:42.489 ============================ 00:11:42.489 Zone Append Size Limit: 0 00:11:42.489 00:11:42.489 00:11:42.489 Active Namespaces 00:11:42.489 ================= 00:11:42.489 Namespace ID:1 00:11:42.489 Error Recovery Timeout: Unlimited 00:11:42.489 Command Set Identifier: NVM (00h) 00:11:42.489 Deallocate: Supported 00:11:42.489
Deallocated/Unwritten Error: Supported 00:11:42.489 Deallocated Read Value: All 0x00 00:11:42.489 Deallocate in Write Zeroes: Not Supported 00:11:42.489 Deallocated Guard Field: 0xFFFF 00:11:42.489 Flush: Supported 00:11:42.489 Reservation: Not Supported 00:11:42.489 Namespace Sharing Capabilities: Private 00:11:42.489 Size (in LBAs): 1310720 (5GiB) 00:11:42.489 Capacity (in LBAs): 1310720 (5GiB) 00:11:42.489 Utilization (in LBAs): 1310720 (5GiB) 00:11:42.489 Thin Provisioning: Not Supported 00:11:42.489 Per-NS Atomic Units: No 00:11:42.489 Maximum Single Source Range Length: 128 00:11:42.489 Maximum Copy Length: 128 00:11:42.489 Maximum Source Range Count: 128 00:11:42.489 NGUID/EUI64 Never Reused: No 00:11:42.489 Namespace Write Protected: No 00:11:42.489 Number of LBA Formats: 8 00:11:42.489 Current LBA Format: LBA Format #04 00:11:42.489 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.489 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.489 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.489 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.489 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:42.489 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.489 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:42.489 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.489 00:11:42.489 NVM Specific Namespace Data 00:11:42.489 =========================== 00:11:42.489 Logical Block Storage Tag Mask: 0 00:11:42.489 Protection Information Capabilities: 00:11:42.489 16b Guard Protection Information Storage Tag Support: No 00:11:42.489 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.489 Storage Tag Check Read Support: No 00:11:42.489 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.489 ===================================================== 00:11:42.489 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:42.489 ===================================================== 00:11:42.489 Controller Capabilities/Features 00:11:42.489 ================================ 00:11:42.489 Vendor ID: 1b36 00:11:42.489 Subsystem Vendor ID: 1af4 00:11:42.489 Serial Number: 12343 00:11:42.489 Model Number: QEMU NVMe Ctrl 00:11:42.489 Firmware Version: 8.0.0 00:11:42.489 Recommended Arb Burst: 6 00:11:42.489 IEEE OUI Identifier: 00 54 52 00:11:42.489 Multi-path I/O 00:11:42.489 May have multiple subsystem ports: No 00:11:42.489 May have multiple controllers: Yes 00:11:42.489 Associated with SR-IOV VF: No 00:11:42.489 Max Data Transfer Size: 524288 00:11:42.489 Max Number of Namespaces: 256 00:11:42.489 Max Number of I/O Queues: 64 00:11:42.489 NVMe Specification Version (VS): 1.4 00:11:42.489 NVMe Specification Version (Identify): 1.4 00:11:42.489 Maximum Queue Entries: 
2048 00:11:42.489 Contiguous Queues Required: Yes 00:11:42.489 Arbitration Mechanisms Supported 00:11:42.489 Weighted Round Robin: Not Supported 00:11:42.489 Vendor Specific: Not Supported 00:11:42.489 Reset Timeout: 7500 ms 00:11:42.489 Doorbell Stride: 4 bytes 00:11:42.489 NVM Subsystem Reset: Not Supported 00:11:42.489 Command Sets Supported 00:11:42.489 NVM Command Set: Supported 00:11:42.489 Boot Partition: Not Supported 00:11:42.489 Memory Page Size Minimum: 4096 bytes 00:11:42.489 Memory Page Size Maximum: 65536 bytes 00:11:42.489 Persistent Memory Region: Not Supported 00:11:42.489 Optional Asynchronous Events Supported 00:11:42.489 Namespace Attribute Notices: Supported 00:11:42.489 Firmware Activation Notices: Not Supported 00:11:42.489 ANA Change Notices: Not Supported 00:11:42.489 PLE Aggregate Log Change Notices: Not Supported 00:11:42.489 LBA Status Info Alert Notices: Not Supported 00:11:42.489 EGE Aggregate Log Change Notices: Not Supported 00:11:42.489 Normal NVM Subsystem Shutdown event: Not Supported 00:11:42.489 Zone Descriptor Change Notices: Not Supported 00:11:42.489 Discovery Log Change Notices: Not Supported 00:11:42.489 Controller Attributes 00:11:42.489 128-bit Host Identifier: Not Supported 00:11:42.489 Non-Operational Permissive Mode: Not Supported 00:11:42.489 NVM Sets: Not Supported 00:11:42.489 Read Recovery Levels: Not Supported 00:11:42.489 Endurance Groups: Supported 00:11:42.489 Predictable Latency Mode: Not Supported 00:11:42.489 Traffic Based Keep ALive: Not Supported 00:11:42.489 Namespace Granularity: Not Supported 00:11:42.489 SQ Associations: Not Supported 00:11:42.489 UUID List: Not Supported 00:11:42.489 Multi-Domain Subsystem: Not Supported 00:11:42.489 Fixed Capacity Management: Not Supported 00:11:42.489 Variable Capacity Management: Not Supported 00:11:42.489 Delete Endurance Group: Not Supported 00:11:42.489 Delete NVM Set: Not Supported 00:11:42.489 Extended LBA Formats Supported: Supported 00:11:42.489 Flexible Data Placement Supported: Supported 00:11:42.489 00:11:42.489 Controller Memory Buffer Support 00:11:42.489 ================================ 00:11:42.489 Supported: No 00:11:42.489 00:11:42.489 Persistent Memory Region Support 00:11:42.489 ================================ 00:11:42.489 Supported: No 00:11:42.489 00:11:42.489 Admin Command Set Attributes 00:11:42.489 ============================ 00:11:42.489 Security Send/Receive: Not Supported 00:11:42.489 Format NVM: Supported 00:11:42.489 Firmware Activate/Download: Not Supported 00:11:42.489 Namespace Management: Supported 00:11:42.489 Device Self-Test: Not Supported 00:11:42.489 Directives: Supported 00:11:42.489 NVMe-MI: Not Supported 00:11:42.489 Virtualization Management: Not Supported 00:11:42.489 Doorbell Buffer Config: Supported 00:11:42.489 Get LBA Status Capability: Not Supported 00:11:42.489 Command & Feature Lockdown Capability: Not Supported 00:11:42.489 Abort Command Limit: 4 00:11:42.489 Async Event Request Limit: 4 00:11:42.490 Number of Firmware Slots: N/A 00:11:42.490 Firmware Slot 1 Read-Only: N/A 00:11:42.490 Firmware Activation Without Reset: N/A 00:11:42.490 Multiple Update Detection Support: N/A 00:11:42.490 Firmware Update Granularity: No Information Provided 00:11:42.490 Per-Namespace SMART Log: Yes 00:11:42.490 Asymmetric Namespace Access Log Page: Not Supported 00:11:42.490 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:42.490 Command Effects Log Page: Supported 00:11:42.490 Get Log Page Extended Data: Supported 00:11:42.490 Telemetry Log Pages: 
Not Supported 00:11:42.490 Persistent Event Log Pages: Not Supported 00:11:42.490 Supported Log Pages Log Page: May Support 00:11:42.490 Commands Supported & Effects Log Page: Not Supported 00:11:42.490 Feature Identifiers & Effects Log Page:May Support 00:11:42.490 NVMe-MI Commands & Effects Log Page: May Support 00:11:42.490 Data Area 4 for Telemetry Log: Not Supported 00:11:42.490 Error Log Page Entries Supported: 1 00:11:42.490 Keep Alive: Not Supported 00:11:42.490 00:11:42.490 NVM Command Set Attributes 00:11:42.490 ========================== 00:11:42.490 Submission Queue Entry Size 00:11:42.490 Max: 64 00:11:42.490 Min: 64 00:11:42.490 Completion Queue Entry Size 00:11:42.490 Max: 16 00:11:42.490 Min: 16 00:11:42.490 Number of Namespaces: 256 00:11:42.490 Compare Command: Supported 00:11:42.490 Write Uncorrectable Command: Not Supported 00:11:42.490 Dataset Management Command: Supported 00:11:42.490 Write Zeroes Command: Supported 00:11:42.490 Set Features Save Field: Supported 00:11:42.490 Reservations: Not Supported 00:11:42.490 Timestamp: Supported 00:11:42.490 Copy: Supported 00:11:42.490 Volatile Write Cache: Present 00:11:42.490 Atomic Write Unit (Normal): 1 00:11:42.490 Atomic Write Unit (PFail): 1 00:11:42.490 Atomic Compare & Write Unit: 1 00:11:42.490 Fused Compare & Write: Not Supported 00:11:42.490 Scatter-Gather List 00:11:42.490 SGL Command Set: Supported 00:11:42.490 SGL Keyed: Not Supported 00:11:42.490 SGL Bit Bucket Descriptor: Not Supported 00:11:42.490 SGL Metadata Pointer: Not Supported 00:11:42.490 Oversized SGL: Not Supported 00:11:42.490 SGL Metadata Address: Not Supported 00:11:42.490 SGL Offset: Not Supported 00:11:42.490 Transport SGL Data Block: Not Supported 00:11:42.490 Replay Protected Memory Block: Not Supported 00:11:42.490 00:11:42.490 Firmware Slot Information 00:11:42.490 ========================= 00:11:42.490 Active slot: 1 00:11:42.490 Slot 1 Firmware Revision: 1.0 00:11:42.490 00:11:42.490 00:11:42.490 Commands Supported and Effects 00:11:42.490 ============================== 00:11:42.490 Admin Commands 00:11:42.490 -------------- 00:11:42.490 Delete I/O Submission Queue (00h): Supported 00:11:42.490 Create I/O Submission Queue (01h): Supported 00:11:42.490 Get Log Page (02h): Supported 00:11:42.490 Delete I/O Completion Queue (04h): Supported 00:11:42.490 Create I/O Completion Queue (05h): Supported 00:11:42.490 Identify (06h): Supported 00:11:42.490 Abort (08h): Supported 00:11:42.490 Set Features (09h): Supported 00:11:42.490 Get Features (0Ah): Supported 00:11:42.490 Asynchronous Event Request (0Ch): Supported 00:11:42.490 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:42.490 Directive Send (19h): Supported 00:11:42.490 Directive Receive (1Ah): Supported 00:11:42.490 Virtualization Management (1Ch): Supported 00:11:42.490 Doorbell Buffer Config (7Ch): Supported 00:11:42.490 Format NVM (80h): Supported LBA-Change 00:11:42.490 I/O Commands 00:11:42.490 ------------ 00:11:42.490 Flush (00h): Supported LBA-Change 00:11:42.490 Write (01h): Supported LBA-Change 00:11:42.490 Read (02h): Supported 00:11:42.490 Compare (05h): Supported 00:11:42.490 Write Zeroes (08h): Supported LBA-Change 00:11:42.490 Dataset Management (09h): Supported LBA-Change 00:11:42.490 Unknown (0Ch): Supported 00:11:42.490 Unknown (12h): Supported 00:11:42.490 Copy (19h): Supported LBA-Change 00:11:42.490 Unknown (1Dh): Supported LBA-Change 00:11:42.490 00:11:42.490 Error Log 00:11:42.490 ========= 00:11:42.490 00:11:42.490 Arbitration 00:11:42.490 
=========== 00:11:42.490 Arbitration Burst: no limit 00:11:42.490 00:11:42.490 Power Management 00:11:42.490 ================ 00:11:42.490 Number of Power States: 1 00:11:42.490 Current Power State: Power State #0 00:11:42.490 Power State #0: 00:11:42.490 Max Power: 25.00 W 00:11:42.490 Non-Operational State: Operational 00:11:42.490 Entry Latency: 16 microseconds 00:11:42.490 Exit Latency: 4 microseconds 00:11:42.490 Relative Read Throughput: 0 00:11:42.490 Relative Read Latency: 0 00:11:42.490 Relative Write Throughput: 0 00:11:42.490 Relative Write Latency: 0 00:11:42.490 Idle Power: Not Reported 00:11:42.490 Active Power: Not Reported 00:11:42.490 Non-Operational Permissive Mode: Not Supported 00:11:42.490 00:11:42.490 Health Information 00:11:42.490 ================== 00:11:42.490 Critical Warnings: 00:11:42.490 Available Spare Space: OK 00:11:42.490 Temperature: OK 00:11:42.490 Device Reliability: OK 00:11:42.490 Read Only: No 00:11:42.490 Volatile Memory Backup: OK 00:11:42.490 Current Temperature: 323 Kelvin (50 Celsius) 00:11:42.490 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:42.490 Available Spare: 0% 00:11:42.490 Available Spare Threshold: 0% 00:11:42.490 Life Percentage Used: 0% 00:11:42.490 Data Units Read: 798 00:11:42.490 Data Units Written: 727 00:11:42.490 Host Read Commands: 38050 00:11:42.490 Host Write Commands: 37473 00:11:42.490 Controller Busy Time: 0 minutes 00:11:42.490 Power Cycles: 0 00:11:42.490 Power On Hours: 0 hours 00:11:42.490 Unsafe Shutdowns: 0 00:11:42.490 Unrecoverable Media Errors: 0 00:11:42.490 Lifetime Error Log Entries: 0 00:11:42.490 Warning Temperature Time: 0 minutes 00:11:42.490 Critical Temperature Time: 0 minutes 00:11:42.490 00:11:42.490 Number of Queues 00:11:42.490 ================ 00:11:42.490 Number of I/O Submission Queues: 64 00:11:42.490 Number of I/O Completion Queues: 64 00:11:42.490 00:11:42.490 ZNS Specific Controller Data 00:11:42.490 ============================ 00:11:42.490 Zone Append Size Limit: 0 00:11:42.490 00:11:42.490 00:11:42.490 Active Namespaces 00:11:42.490 ================= 00:11:42.490 Namespace ID:1 00:11:42.490 Error Recovery Timeout: Unlimited 00:11:42.490 Command Set Identifier: NVM (00h) 00:11:42.490 Deallocate: Supported 00:11:42.490 Deallocated/Unwritten Error: Supported 00:11:42.490 Deallocated Read Value: All 0x00 00:11:42.490 Deallocate in Write Zeroes: Not Supported 00:11:42.490 Deallocated Guard Field: 0xFFFF 00:11:42.490 Flush: Supported 00:11:42.490 Reservation: Not Supported 00:11:42.490 Namespace Sharing Capabilities: Multiple Controllers 00:11:42.490 Size (in LBAs): 262144 (1GiB) 00:11:42.490 Capacity (in LBAs): 262144 (1GiB) 00:11:42.490 Utilization (in LBAs): 262144 (1GiB) 00:11:42.490 Thin Provisioning: Not Supported 00:11:42.490 Per-NS Atomic Units: No 00:11:42.490 Maximum Single Source Range Length: 128 00:11:42.490 Maximum Copy Length: 128 00:11:42.490 Maximum Source Range Count: 128 00:11:42.490 NGUID/EUI64 Never Reused: No 00:11:42.490 Namespace Write Protected: No 00:11:42.490 Endurance group ID: 1 00:11:42.490 Number of LBA Formats: 8 00:11:42.490 Current LBA Format: LBA Format #04 00:11:42.490 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.490 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.490 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.490 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.490 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:42.490 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.490 LBA Format #06: Data 
Size: 4096 Metadata Size: 16 00:11:42.490 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.490 00:11:42.490 Get Feature FDP: 00:11:42.490 ================ 00:11:42.490 Enabled: Yes 00:11:42.490 FDP configuration index: 0 00:11:42.490 00:11:42.490 FDP configurations log page 00:11:42.490 =========================== 00:11:42.490 Number of FDP configurations: 1 00:11:42.490 Version: 0 00:11:42.490 Size: 112 00:11:42.490 FDP Configuration Descriptor: 0 00:11:42.490 Descriptor Size: 96 00:11:42.490 Reclaim Group Identifier format: 2 00:11:42.490 FDP Volatile Write Cache: Not Present 00:11:42.490 FDP Configuration: Valid 00:11:42.490 Vendor Specific Size: 0 00:11:42.490 Number of Reclaim Groups: 2 00:11:42.490 Number of Reclaim Unit Handles: 8 00:11:42.490 Max Placement Identifiers: 128 00:11:42.490 Number of Namespaces Supported: 256 00:11:42.490 Reclaim Unit Nominal Size: 6000000 bytes 00:11:42.490 Estimated Reclaim Unit Time Limit: Not Reported 00:11:42.490 RUH Desc #000: RUH Type: Initially Isolated 00:11:42.490 RUH Desc #001: RUH Type: Initially Isolated 00:11:42.491 RUH Desc #002: RUH Type: Initially Isolated 00:11:42.491 RUH Desc #003: RUH Type: Initially Isolated 00:11:42.491 RUH Desc #004: RUH Type: Initially Isolated 00:11:42.491 RUH Desc #005: RUH Type: Initially Isolated 00:11:42.491 RUH Desc #006: RUH Type: Initially Isolated 00:11:42.491 RUH Desc #007: RUH Type: Initially Isolated 00:11:42.491 00:11:42.491 FDP reclaim unit handle usage log page 00:11:42.491 ====================================== 00:11:42.491 Number of Reclaim Unit Handles: 8 00:11:42.491 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:42.491 RUH Usage Desc #001: RUH Attributes: Unused 00:11:42.491 RUH Usage Desc #002: RUH Attributes: Unused 00:11:42.491 RUH Usage Desc #003: RUH Attributes: Unused 00:11:42.491 RUH Usage Desc #004: RUH Attributes: Unused 00:11:42.491 RUH Usage Desc #005: RUH Attributes: Unused 00:11:42.491 RUH Usage Desc #006: RUH Attributes: Unused 00:11:42.491 RUH Usage Desc #007: RUH Attributes: Unused 00:11:42.491 00:11:42.491 FDP statistics log page 00:11:42.491 ======================= 00:11:42.491 Host bytes with metadata written: 439525376 00:11:42.491 Media bytes with metadata written: 439578624 00:11:42.491 Media bytes erased: 0 00:11:42.491 00:11:42.491 FDP events log page 00:11:42.491 =================== 00:11:42.491 Number of FDP events: 0 00:11:42.491 00:11:42.491 NVM Specific Namespace Data 00:11:42.491 =========================== 00:11:42.491 Logical Block Storage Tag Mask: 0 00:11:42.491 Protection Information Capabilities: 00:11:42.491 16b Guard Protection Information Storage Tag Support: No 00:11:42.491 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.491 Storage Tag Check Read Support: No 00:11:42.491 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.491 ===================================================== 00:11:42.491 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:42.491 ===================================================== 00:11:42.491 Controller Capabilities/Features 00:11:42.491 ================================ 00:11:42.491 Vendor ID: 1b36 00:11:42.491 Subsystem Vendor ID: 1af4 00:11:42.491 Serial Number: 12340 00:11:42.491 Model Number: QEMU NVMe Ctrl 00:11:42.491 Firmware Version: 8.0.0 00:11:42.491 Recommended Arb Burst: 6 00:11:42.491 IEEE OUI Identifier: 00 54 52 00:11:42.491 Multi-path I/O 00:11:42.491 May have multiple subsystem ports: No 00:11:42.491 May have multiple controllers: No 00:11:42.491 Associated with SR-IOV VF: No 00:11:42.491 Max Data Transfer Size: 524288 00:11:42.491 Max Number of Namespaces: 256 00:11:42.491 Max Number of I/O Queues: 64 00:11:42.491 NVMe Specification Version (VS): 1.4 00:11:42.491 NVMe Specification Version (Identify): 1.4 00:11:42.491 Maximum Queue Entries: 2048 00:11:42.491 Contiguous Queues Required: Yes 00:11:42.491 Arbitration Mechanisms Supported 00:11:42.491 Weighted Round Robin: Not Supported 00:11:42.491 Vendor Specific: Not Supported 00:11:42.491 Reset Timeout: 7500 ms 00:11:42.491 Doorbell Stride: 4 bytes 00:11:42.491 NVM Subsystem Reset: Not Supported 00:11:42.491 Command Sets Supported 00:11:42.491 NVM Command Set: Supported 00:11:42.491 Boot Partition: Not Supported 00:11:42.491 Memory Page Size Minimum: 4096 bytes 00:11:42.491 Memory Page Size Maximum: 65536 bytes 00:11:42.491 Persistent Memory Region: Not Supported 00:11:42.491 Optional Asynchronous Events Supported 00:11:42.491 Namespace Attribute Notices: Supported 00:11:42.491 Firmware Activation Notices: Not Supported 00:11:42.491 ANA Change Notices: Not Supported 00:11:42.491 PLE Aggregate Log Change Notices: Not Supported 00:11:42.491 LBA Status Info Alert Notices: Not Supported 00:11:42.491 EGE Aggregate Log Change Notices: Not Supported 00:11:42.491 Normal NVM Subsystem Shutdown event: Not Supported 00:11:42.491 Zone Descriptor Change Notices: Not Supported 00:11:42.491 Discovery Log Change Notices: Not Supported 00:11:42.491 Controller Attributes 00:11:42.491 128-bit Host Identifier: Not Supported 00:11:42.491 Non-Operational Permissive Mode: Not Supported 00:11:42.491 NVM Sets: Not Supported 00:11:42.491 Read Recovery Levels: Not Supported 00:11:42.491 Endurance Groups: Not Supported 00:11:42.491 Predictable Latency Mode: Not Supported 00:11:42.491 Traffic Based Keep ALive: Not Supported 00:11:42.491 Namespace Granularity: Not Supported 00:11:42.491 SQ Associations: Not Supported 00:11:42.491 UUID List: Not Supported 00:11:42.491 Multi-Domain Subsystem: Not Supported 00:11:42.491 Fixed Capacity Management: Not Supported 00:11:42.491 Variable Capacity Management: Not Supported 00:11:42.491 Delete Endurance Group: Not Supported 00:11:42.491 Delete NVM Set: Not Supported 00:11:42.491 Extended LBA Formats Supported: Supported 00:11:42.491 Flexible Data Placement Supported: Not Supported 00:11:42.491 00:11:42.491 Controller Memory Buffer Support 00:11:42.491 ================================ 00:11:42.491 Supported: No 00:11:42.491 00:11:42.491 Persistent Memory Region Support 00:11:42.491 ================================ 00:11:42.491 Supported: No 00:11:42.491 00:11:42.491 Admin Command Set Attributes 00:11:42.491 ============================ 00:11:42.491 Security Send/Receive: Not Supported 00:11:42.491 Format NVM: 
Supported 00:11:42.491 Firmware Activate/Download: Not Supported 00:11:42.491 Namespace Management: Supported 00:11:42.491 Device Self-Test: Not Supported 00:11:42.491 Directives: Supported 00:11:42.491 NVMe-MI: Not Supported 00:11:42.491 Virtualization Management: Not Supported 00:11:42.491 Doorbell Buffer Config: Supported 00:11:42.491 Get LBA Status Capability: Not Supported 00:11:42.491 Command & Feature Lockdown Capability: Not Supported 00:11:42.491 Abort Command Limit: 4 00:11:42.491 Async Event Request Limit: 4 00:11:42.491 Number of Firmware Slots: N/A 00:11:42.491 Firmware Slot 1 Read-Only: N/A 00:11:42.491 Firmware Activation Without Reset: N/A 00:11:42.491 Multiple Update Detection Support: N/A 00:11:42.491 Firmware Update Granularity: No Information Provided 00:11:42.491 Per-Namespace SMART Log: Yes 00:11:42.491 Asymmetric Namespace Access Log Page: Not Supported 00:11:42.491 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:42.491 Command Effects Log Page: Supported 00:11:42.491 Get Log Page Extended Data: Supported 00:11:42.491 Telemetry Log Pages: Not Supported 00:11:42.491 Persistent Event Log Pages: Not Supported 00:11:42.491 Supported Log Pages Log Page: May Support 00:11:42.491 Commands Supported & Effects Log Page: Not Supported 00:11:42.491 Feature Identifiers & Effects Log Page:May Support 00:11:42.491 NVMe-MI Commands & Effects Log Page: May Support 00:11:42.491 Data Area 4 for Telemetry Log: Not Supported 00:11:42.491 Error Log Page Entries Supported: 1 00:11:42.491 Keep Alive: Not Supported 00:11:42.491 00:11:42.491 NVM Command Set Attributes 00:11:42.491 ========================== 00:11:42.491 Submission Queue Entry Size 00:11:42.491 Max: 64 00:11:42.491 Min: 64 00:11:42.491 Completion Queue Entry Size 00:11:42.491 Max: 16 00:11:42.491 Min: 16 00:11:42.491 Number of Namespaces: 256 00:11:42.491 Compare Command: Supported 00:11:42.491 Write Uncorrectable Command: Not Supported 00:11:42.491 Dataset Management Command: Supported 00:11:42.491 Write Zeroes Command: Supported 00:11:42.491 Set Features Save Field: Supported 00:11:42.491 Reservations: Not Supported 00:11:42.491 Timestamp: Supported 00:11:42.491 Copy: Supported 00:11:42.491 Volatile Write Cache: Present 00:11:42.491 Atomic Write Unit (Normal): 1 00:11:42.491 Atomic Write Unit (PFail): 1 00:11:42.491 Atomic Compare & Write Unit: 1 00:11:42.491 Fused Compare & Write: Not Supported 00:11:42.491 Scatter-Gather List 00:11:42.491 SGL Command Set: Supported 00:11:42.491 SGL Keyed: Not Supported 00:11:42.491 SGL Bit Bucket Descriptor: Not Supported 00:11:42.491 SGL Metadata Pointer: Not Supported 00:11:42.491 Oversized SGL: Not Supported 00:11:42.491 SGL Metadata Address: Not Supported 00:11:42.491 SGL Offset: Not Supported 00:11:42.491 Transport SGL Data Block: Not Supported 00:11:42.491 Replay Protected Memory Block: Not Supported 00:11:42.491 00:11:42.491 Firmware Slot Information 00:11:42.491 ========================= 00:11:42.491 Active slot: 1 00:11:42.491 Slot 1 Firmware Revision: 1.0 00:11:42.491 00:11:42.491 00:11:42.491 Commands Supported and Effects 00:11:42.491 ============================== 00:11:42.491 Admin Commands 00:11:42.491 -------------- 00:11:42.491 Delete I/O Submission Queue (00h): Supported 00:11:42.492 Create I/O Submission Queue (01h): Supported 00:11:42.492 Get Log Page (02h): Supported 00:11:42.492 Delete I/O Completion Queue (04h): Supported 00:11:42.492 Create I/O Completion Queue (05h): Supported 00:11:42.492 Identify (06h): Supported 00:11:42.492 Abort (08h): Supported 00:11:42.492 
Set Features (09h): Supported 00:11:42.492 Get Features (0Ah): Supported 00:11:42.492 Asynchronous Event Request (0Ch): Supported 00:11:42.492 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:42.492 Directive Send (19h): Supported 00:11:42.492 Directive Receive (1Ah): Supported 00:11:42.492 Virtualization Management (1Ch): Supported 00:11:42.492 Doorbell Buffer Config (7Ch): Supported 00:11:42.492 Format NVM (80h): Supported LBA-Change 00:11:42.492 I/O Commands 00:11:42.492 ------------ 00:11:42.492 Flush (00h): Supported LBA-Change 00:11:42.492 Write (01h): Supported LBA-Change 00:11:42.492 Read (02h): Supported 00:11:42.492 Compare (05h): Supported 00:11:42.492 Write Zeroes (08h): Supported LBA-Change 00:11:42.492 Dataset Management (09h): Supported LBA-Change 00:11:42.492 Unknown (0Ch): Supported 00:11:42.492 Unknown (12h): Supported 00:11:42.492 Copy (19h): Supported LBA-Change 00:11:42.492 Unknown (1Dh): Supported LBA-Change 00:11:42.492 00:11:42.492 Error Log 00:11:42.492 ========= 00:11:42.492 00:11:42.492 Arbitration 00:11:42.492 =========== 00:11:42.492 Arbitration Burst: no limit 00:11:42.492 00:11:42.492 Power Management 00:11:42.492 ================ 00:11:42.492 Number of Power States: 1 00:11:42.492 Current Power State: Power State #0 00:11:42.492 Power State #0: 00:11:42.492 Max Power: 25.00 W 00:11:42.492 Non-Operational State: Operational 00:11:42.492 Entry Latency: 16 microseconds 00:11:42.492 Exit Latency: 4 microseconds 00:11:42.492 Relative Read Throughput: 0 00:11:42.492 Relative Read Latency: 0 00:11:42.492 Relative Write Throughput: 0 00:11:42.492 Relative Write Latency: 0 00:11:42.492 Idle Power: Not Reported 00:11:42.492 Active Power: Not Reported 00:11:42.492 Non-Operational Permissive Mode: Not Supported 00:11:42.492 00:11:42.492 Health Information 00:11:42.492 ================== 00:11:42.492 Critical Warnings: 00:11:42.492 Available Spare Space: OK 00:11:42.492 Temperature: OK 00:11:42.492 Device Reliability: OK 00:11:42.492 Read Only: No 00:11:42.492 Volatile Memory Backup: OK 00:11:42.492 Current Temperature: 323 Kelvin (50 Celsius) 00:11:42.492 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:42.492 Available Spare: 0% 00:11:42.492 Available Spare Threshold: 0% 00:11:42.492 Life Percentage Used: 0% 00:11:42.492 Data Units Read: 677 00:11:42.492 Data Units Written: 605 00:11:42.492 Host Read Commands: 36685 00:11:42.492 Host Write Commands: 36471 00:11:42.492 Controller Busy Time: 0 minutes 00:11:42.492 Power Cycles: 0 00:11:42.492 Power On Hours: 0 hours 00:11:42.492 Unsafe Shutdowns: 0 00:11:42.492 Unrecoverable Media Errors: 0 00:11:42.492 Lifetime Error Log Entries: 0 00:11:42.492 Warning Temperature Time: 0 minutes 00:11:42.492 Critical Temperature Time: 0 minutes 00:11:42.492 00:11:42.492 Number of Queues 00:11:42.492 ================ 00:11:42.492 Number of I/O Submission Queues: 64 00:11:42.492 Number of I/O Completion Queues: 64 00:11:42.492 00:11:42.492 ZNS Specific Controller Data 00:11:42.492 ============================ 00:11:42.492 Zone Append Size Limit: 0 00:11:42.492 00:11:42.492 00:11:42.492 Active Namespaces 00:11:42.492 ================= 00:11:42.492 Namespace ID:1 00:11:42.492 Error Recovery Timeout: Unlimited 00:11:42.492 Command Set Identifier: NVM (00h) 00:11:42.492 Deallocate: Supported 00:11:42.492 Deallocated/Unwritten Error: Supported 00:11:42.492 Deallocated Read Value: All 0x00 00:11:42.492 Deallocate in Write Zeroes: Not Supported 00:11:42.492 Deallocated Guard Field: 0xFFFF 00:11:42.492 Flush: Supported 
00:11:42.492 Reservation: Not Supported 00:11:42.492 Metadata Transferred as: Separate Metadata Buffer 00:11:42.492 Namespace Sharing Capabilities: Private 00:11:42.492 Size (in LBAs): 1548666 (5GiB) 00:11:42.492 Capacity (in LBAs): 1548666 (5GiB) 00:11:42.492 Utilization (in LBAs): 1548666 (5GiB) 00:11:42.492 Thin Provisioning: Not Supported 00:11:42.492 Per-NS Atomic Units: No 00:11:42.492 Maximum Single Source Range Length: 128 00:11:42.492 Maximum Copy Length: 128 00:11:42.492 Maximum Source Range Count: 128 00:11:42.492 NGUID/EUI64 Never Reused: No 00:11:42.492 Namespace Write Protected: No 00:11:42.492 Number of LBA Formats: 8 00:11:42.492 Current LBA Format: LBA Format #07 00:11:42.492 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.492 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.492 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.492 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.492 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:42.492 [2024-11-20 13:31:41.815109] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63089 terminated unexpected 00:11:42.492 [2024-11-20 13:31:41.815616] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63089 terminated unexpected 00:11:42.492 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.492 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:42.492 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.492 00:11:42.492 NVM Specific Namespace Data 00:11:42.492 =========================== 00:11:42.492 Logical Block Storage Tag Mask: 0 00:11:42.492 Protection Information Capabilities: 00:11:42.492 16b Guard Protection Information Storage Tag Support: No 00:11:42.492 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.492 Storage Tag Check Read Support: No 00:11:42.492 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.492 ===================================================== 00:11:42.492 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:42.492 ===================================================== 00:11:42.492 Controller Capabilities/Features 00:11:42.492 ================================ 00:11:42.492 Vendor ID: 1b36 00:11:42.492 Subsystem Vendor ID: 1af4 00:11:42.492 Serial Number: 12342 00:11:42.492 Model Number: QEMU NVMe Ctrl 00:11:42.492 Firmware Version: 8.0.0 00:11:42.492 Recommended Arb Burst: 6 00:11:42.492 IEEE OUI Identifier: 00 54 52 00:11:42.492 Multi-path I/O 00:11:42.492 May have multiple subsystem ports: No 00:11:42.492 May have multiple controllers: No 00:11:42.492 Associated with SR-IOV VF: No 00:11:42.492 Max Data Transfer Size: 524288 00:11:42.492 Max Number of Namespaces: 256 00:11:42.492 Max Number
of I/O Queues: 64 00:11:42.492 NVMe Specification Version (VS): 1.4 00:11:42.492 NVMe Specification Version (Identify): 1.4 00:11:42.492 Maximum Queue Entries: 2048 00:11:42.492 Contiguous Queues Required: Yes 00:11:42.492 Arbitration Mechanisms Supported 00:11:42.492 Weighted Round Robin: Not Supported 00:11:42.492 Vendor Specific: Not Supported 00:11:42.492 Reset Timeout: 7500 ms 00:11:42.492 Doorbell Stride: 4 bytes 00:11:42.493 NVM Subsystem Reset: Not Supported 00:11:42.493 Command Sets Supported 00:11:42.493 NVM Command Set: Supported 00:11:42.493 Boot Partition: Not Supported 00:11:42.493 Memory Page Size Minimum: 4096 bytes 00:11:42.493 Memory Page Size Maximum: 65536 bytes 00:11:42.493 Persistent Memory Region: Not Supported 00:11:42.493 Optional Asynchronous Events Supported 00:11:42.493 Namespace Attribute Notices: Supported 00:11:42.493 Firmware Activation Notices: Not Supported 00:11:42.493 ANA Change Notices: Not Supported 00:11:42.493 PLE Aggregate Log Change Notices: Not Supported 00:11:42.493 LBA Status Info Alert Notices: Not Supported 00:11:42.493 EGE Aggregate Log Change Notices: Not Supported 00:11:42.493 Normal NVM Subsystem Shutdown event: Not Supported 00:11:42.493 Zone Descriptor Change Notices: Not Supported 00:11:42.493 Discovery Log Change Notices: Not Supported 00:11:42.493 Controller Attributes 00:11:42.493 128-bit Host Identifier: Not Supported 00:11:42.493 Non-Operational Permissive Mode: Not Supported 00:11:42.493 NVM Sets: Not Supported 00:11:42.493 Read Recovery Levels: Not Supported 00:11:42.493 Endurance Groups: Not Supported 00:11:42.493 Predictable Latency Mode: Not Supported 00:11:42.493 Traffic Based Keep ALive: Not Supported 00:11:42.493 Namespace Granularity: Not Supported 00:11:42.493 SQ Associations: Not Supported 00:11:42.493 UUID List: Not Supported 00:11:42.493 Multi-Domain Subsystem: Not Supported 00:11:42.493 Fixed Capacity Management: Not Supported 00:11:42.493 Variable Capacity Management: Not Supported 00:11:42.493 Delete Endurance Group: Not Supported 00:11:42.493 Delete NVM Set: Not Supported 00:11:42.493 Extended LBA Formats Supported: Supported 00:11:42.493 Flexible Data Placement Supported: Not Supported 00:11:42.493 00:11:42.493 Controller Memory Buffer Support 00:11:42.493 ================================ 00:11:42.493 Supported: No 00:11:42.493 00:11:42.493 Persistent Memory Region Support 00:11:42.493 ================================ 00:11:42.493 Supported: No 00:11:42.493 00:11:42.493 Admin Command Set Attributes 00:11:42.493 ============================ 00:11:42.493 Security Send/Receive: Not Supported 00:11:42.493 Format NVM: Supported 00:11:42.493 Firmware Activate/Download: Not Supported 00:11:42.493 Namespace Management: Supported 00:11:42.493 Device Self-Test: Not Supported 00:11:42.493 Directives: Supported 00:11:42.493 NVMe-MI: Not Supported 00:11:42.493 Virtualization Management: Not Supported 00:11:42.493 Doorbell Buffer Config: Supported 00:11:42.493 Get LBA Status Capability: Not Supported 00:11:42.493 Command & Feature Lockdown Capability: Not Supported 00:11:42.493 Abort Command Limit: 4 00:11:42.493 Async Event Request Limit: 4 00:11:42.493 Number of Firmware Slots: N/A 00:11:42.493 Firmware Slot 1 Read-Only: N/A 00:11:42.493 Firmware Activation Without Reset: N/A 00:11:42.493 Multiple Update Detection Support: N/A 00:11:42.493 Firmware Update Granularity: No Information Provided 00:11:42.493 Per-Namespace SMART Log: Yes 00:11:42.493 Asymmetric Namespace Access Log Page: Not Supported 00:11:42.493 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:11:42.493 Command Effects Log Page: Supported 00:11:42.493 Get Log Page Extended Data: Supported 00:11:42.493 Telemetry Log Pages: Not Supported 00:11:42.493 Persistent Event Log Pages: Not Supported 00:11:42.493 Supported Log Pages Log Page: May Support 00:11:42.493 Commands Supported & Effects Log Page: Not Supported 00:11:42.493 Feature Identifiers & Effects Log Page:May Support 00:11:42.493 NVMe-MI Commands & Effects Log Page: May Support 00:11:42.493 Data Area 4 for Telemetry Log: Not Supported 00:11:42.493 Error Log Page Entries Supported: 1 00:11:42.493 Keep Alive: Not Supported 00:11:42.493 00:11:42.493 NVM Command Set Attributes 00:11:42.493 ========================== 00:11:42.493 Submission Queue Entry Size 00:11:42.493 Max: 64 00:11:42.493 Min: 64 00:11:42.493 Completion Queue Entry Size 00:11:42.493 Max: 16 00:11:42.493 Min: 16 00:11:42.493 Number of Namespaces: 256 00:11:42.493 Compare Command: Supported 00:11:42.493 Write Uncorrectable Command: Not Supported 00:11:42.493 Dataset Management Command: Supported 00:11:42.493 Write Zeroes Command: Supported 00:11:42.493 Set Features Save Field: Supported 00:11:42.493 Reservations: Not Supported 00:11:42.493 Timestamp: Supported 00:11:42.493 Copy: Supported 00:11:42.493 Volatile Write Cache: Present 00:11:42.493 Atomic Write Unit (Normal): 1 00:11:42.493 Atomic Write Unit (PFail): 1 00:11:42.493 Atomic Compare & Write Unit: 1 00:11:42.493 Fused Compare & Write: Not Supported 00:11:42.493 Scatter-Gather List 00:11:42.493 SGL Command Set: Supported 00:11:42.493 SGL Keyed: Not Supported 00:11:42.493 SGL Bit Bucket Descriptor: Not Supported 00:11:42.493 SGL Metadata Pointer: Not Supported 00:11:42.493 Oversized SGL: Not Supported 00:11:42.493 SGL Metadata Address: Not Supported 00:11:42.493 SGL Offset: Not Supported 00:11:42.493 Transport SGL Data Block: Not Supported 00:11:42.493 Replay Protected Memory Block: Not Supported 00:11:42.493 00:11:42.493 Firmware Slot Information 00:11:42.493 ========================= 00:11:42.493 Active slot: 1 00:11:42.493 Slot 1 Firmware Revision: 1.0 00:11:42.493 00:11:42.493 00:11:42.493 Commands Supported and Effects 00:11:42.493 ============================== 00:11:42.493 Admin Commands 00:11:42.493 -------------- 00:11:42.493 Delete I/O Submission Queue (00h): Supported 00:11:42.493 Create I/O Submission Queue (01h): Supported 00:11:42.493 Get Log Page (02h): Supported 00:11:42.493 Delete I/O Completion Queue (04h): Supported 00:11:42.493 Create I/O Completion Queue (05h): Supported 00:11:42.493 Identify (06h): Supported 00:11:42.493 Abort (08h): Supported 00:11:42.493 Set Features (09h): Supported 00:11:42.493 Get Features (0Ah): Supported 00:11:42.493 Asynchronous Event Request (0Ch): Supported 00:11:42.493 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:42.493 Directive Send (19h): Supported 00:11:42.493 Directive Receive (1Ah): Supported 00:11:42.493 Virtualization Management (1Ch): Supported 00:11:42.493 Doorbell Buffer Config (7Ch): Supported 00:11:42.493 Format NVM (80h): Supported LBA-Change 00:11:42.493 I/O Commands 00:11:42.493 ------------ 00:11:42.493 Flush (00h): Supported LBA-Change 00:11:42.493 Write (01h): Supported LBA-Change 00:11:42.493 Read (02h): Supported 00:11:42.493 Compare (05h): Supported 00:11:42.493 Write Zeroes (08h): Supported LBA-Change 00:11:42.493 Dataset Management (09h): Supported LBA-Change 00:11:42.493 Unknown (0Ch): Supported 00:11:42.493 Unknown (12h): Supported 00:11:42.493 Copy (19h): Supported LBA-Change 
00:11:42.493 Unknown (1Dh): Supported LBA-Change 00:11:42.493 00:11:42.493 Error Log 00:11:42.493 ========= 00:11:42.493 00:11:42.493 Arbitration 00:11:42.493 =========== 00:11:42.493 Arbitration Burst: no limit 00:11:42.493 00:11:42.493 Power Management 00:11:42.493 ================ 00:11:42.493 Number of Power States: 1 00:11:42.493 Current Power State: Power State #0 00:11:42.493 Power State #0: 00:11:42.493 Max Power: 25.00 W 00:11:42.493 Non-Operational State: Operational 00:11:42.493 Entry Latency: 16 microseconds 00:11:42.493 Exit Latency: 4 microseconds 00:11:42.493 Relative Read Throughput: 0 00:11:42.493 Relative Read Latency: 0 00:11:42.493 Relative Write Throughput: 0 00:11:42.493 Relative Write Latency: 0 00:11:42.493 Idle Power: Not Reported 00:11:42.493 Active Power: Not Reported 00:11:42.493 Non-Operational Permissive Mode: Not Supported 00:11:42.493 00:11:42.493 Health Information 00:11:42.493 ================== 00:11:42.493 Critical Warnings: 00:11:42.493 Available Spare Space: OK 00:11:42.493 Temperature: OK 00:11:42.493 Device Reliability: OK 00:11:42.493 Read Only: No 00:11:42.493 Volatile Memory Backup: OK 00:11:42.493 Current Temperature: 323 Kelvin (50 Celsius) 00:11:42.493 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:42.493 Available Spare: 0% 00:11:42.493 Available Spare Threshold: 0% 00:11:42.493 Life Percentage Used: 0% 00:11:42.493 Data Units Read: 2136 00:11:42.493 Data Units Written: 1924 00:11:42.493 Host Read Commands: 111491 00:11:42.493 Host Write Commands: 109760 00:11:42.493 Controller Busy Time: 0 minutes 00:11:42.493 Power Cycles: 0 00:11:42.493 Power On Hours: 0 hours 00:11:42.493 Unsafe Shutdowns: 0 00:11:42.493 Unrecoverable Media Errors: 0 00:11:42.494 Lifetime Error Log Entries: 0 00:11:42.494 Warning Temperature Time: 0 minutes 00:11:42.494 Critical Temperature Time: 0 minutes 00:11:42.494 00:11:42.494 Number of Queues 00:11:42.494 ================ 00:11:42.494 Number of I/O Submission Queues: 64 00:11:42.494 Number of I/O Completion Queues: 64 00:11:42.494 00:11:42.494 ZNS Specific Controller Data 00:11:42.494 ============================ 00:11:42.494 Zone Append Size Limit: 0 00:11:42.494 00:11:42.494 00:11:42.494 Active Namespaces 00:11:42.494 ================= 00:11:42.494 Namespace ID:1 00:11:42.494 Error Recovery Timeout: Unlimited 00:11:42.494 Command Set Identifier: NVM (00h) 00:11:42.494 Deallocate: Supported 00:11:42.494 Deallocated/Unwritten Error: Supported 00:11:42.494 Deallocated Read Value: All 0x00 00:11:42.494 Deallocate in Write Zeroes: Not Supported 00:11:42.494 Deallocated Guard Field: 0xFFFF 00:11:42.494 Flush: Supported 00:11:42.494 Reservation: Not Supported 00:11:42.494 Namespace Sharing Capabilities: Private 00:11:42.494 Size (in LBAs): 1048576 (4GiB) 00:11:42.494 Capacity (in LBAs): 1048576 (4GiB) 00:11:42.494 Utilization (in LBAs): 1048576 (4GiB) 00:11:42.494 Thin Provisioning: Not Supported 00:11:42.494 Per-NS Atomic Units: No 00:11:42.494 Maximum Single Source Range Length: 128 00:11:42.494 Maximum Copy Length: 128 00:11:42.494 Maximum Source Range Count: 128 00:11:42.494 NGUID/EUI64 Never Reused: No 00:11:42.494 Namespace Write Protected: No 00:11:42.494 Number of LBA Formats: 8 00:11:42.494 Current LBA Format: LBA Format #04 00:11:42.494 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.494 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.494 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.494 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.494 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:11:42.494 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.494 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:42.494 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.494 00:11:42.494 NVM Specific Namespace Data 00:11:42.494 =========================== 00:11:42.494 Logical Block Storage Tag Mask: 0 00:11:42.494 Protection Information Capabilities: 00:11:42.494 16b Guard Protection Information Storage Tag Support: No 00:11:42.494 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.494 Storage Tag Check Read Support: No 00:11:42.494 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Namespace ID:2 00:11:42.494 Error Recovery Timeout: Unlimited 00:11:42.494 Command Set Identifier: NVM (00h) 00:11:42.494 Deallocate: Supported 00:11:42.494 Deallocated/Unwritten Error: Supported 00:11:42.494 Deallocated Read Value: All 0x00 00:11:42.494 Deallocate in Write Zeroes: Not Supported 00:11:42.494 Deallocated Guard Field: 0xFFFF 00:11:42.494 Flush: Supported 00:11:42.494 Reservation: Not Supported 00:11:42.494 Namespace Sharing Capabilities: Private 00:11:42.494 Size (in LBAs): 1048576 (4GiB) 00:11:42.494 Capacity (in LBAs): 1048576 (4GiB) 00:11:42.494 Utilization (in LBAs): 1048576 (4GiB) 00:11:42.494 Thin Provisioning: Not Supported 00:11:42.494 Per-NS Atomic Units: No 00:11:42.494 Maximum Single Source Range Length: 128 00:11:42.494 Maximum Copy Length: 128 00:11:42.494 Maximum Source Range Count: 128 00:11:42.494 NGUID/EUI64 Never Reused: No 00:11:42.494 Namespace Write Protected: No 00:11:42.494 Number of LBA Formats: 8 00:11:42.494 Current LBA Format: LBA Format #04 00:11:42.494 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.494 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.494 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.494 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.494 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:42.494 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.494 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:42.494 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.494 00:11:42.494 NVM Specific Namespace Data 00:11:42.494 =========================== 00:11:42.494 Logical Block Storage Tag Mask: 0 00:11:42.494 Protection Information Capabilities: 00:11:42.494 16b Guard Protection Information Storage Tag Support: No 00:11:42.494 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.494 Storage Tag Check Read Support: No 00:11:42.494 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:11:42.494 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Namespace ID:3 00:11:42.494 Error Recovery Timeout: Unlimited 00:11:42.494 Command Set Identifier: NVM (00h) 00:11:42.494 Deallocate: Supported 00:11:42.494 Deallocated/Unwritten Error: Supported 00:11:42.494 Deallocated Read Value: All 0x00 00:11:42.494 Deallocate in Write Zeroes: Not Supported 00:11:42.494 Deallocated Guard Field: 0xFFFF 00:11:42.494 Flush: Supported 00:11:42.494 Reservation: Not Supported 00:11:42.494 Namespace Sharing Capabilities: Private 00:11:42.494 Size (in LBAs): 1048576 (4GiB) 00:11:42.494 Capacity (in LBAs): 1048576 (4GiB) 00:11:42.494 Utilization (in LBAs): 1048576 (4GiB) 00:11:42.494 Thin Provisioning: Not Supported 00:11:42.494 Per-NS Atomic Units: No 00:11:42.494 Maximum Single Source Range Length: 128 00:11:42.494 Maximum Copy Length: 128 00:11:42.494 Maximum Source Range Count: 128 00:11:42.494 NGUID/EUI64 Never Reused: No 00:11:42.494 Namespace Write Protected: No 00:11:42.494 Number of LBA Formats: 8 00:11:42.494 Current LBA Format: LBA Format #04 00:11:42.494 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.494 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.494 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.494 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.494 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:42.494 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.494 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:42.494 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.494 00:11:42.494 NVM Specific Namespace Data 00:11:42.494 =========================== 00:11:42.494 Logical Block Storage Tag Mask: 0 00:11:42.494 Protection Information Capabilities: 00:11:42.494 16b Guard Protection Information Storage Tag Support: No 00:11:42.494 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.494 Storage Tag Check Read Support: No 00:11:42.494 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.494 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:42.494 13:31:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:42.756 ===================================================== 00:11:42.756 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:42.756 ===================================================== 00:11:42.756 Controller Capabilities/Features 00:11:42.756 ================================ 00:11:42.756 Vendor ID: 1b36 00:11:42.756 Subsystem Vendor ID: 1af4 00:11:42.756 Serial Number: 12340 00:11:42.756 Model Number: QEMU NVMe Ctrl 00:11:42.756 Firmware Version: 8.0.0 00:11:42.756 Recommended Arb Burst: 6 00:11:42.756 IEEE OUI Identifier: 00 54 52 00:11:42.756 Multi-path I/O 00:11:42.756 May have multiple subsystem ports: No 00:11:42.756 May have multiple controllers: No 00:11:42.756 Associated with SR-IOV VF: No 00:11:42.756 Max Data Transfer Size: 524288 00:11:42.756 Max Number of Namespaces: 256 00:11:42.756 Max Number of I/O Queues: 64 00:11:42.756 NVMe Specification Version (VS): 1.4 00:11:42.756 NVMe Specification Version (Identify): 1.4 00:11:42.756 Maximum Queue Entries: 2048 00:11:42.756 Contiguous Queues Required: Yes 00:11:42.756 Arbitration Mechanisms Supported 00:11:42.756 Weighted Round Robin: Not Supported 00:11:42.756 Vendor Specific: Not Supported 00:11:42.756 Reset Timeout: 7500 ms 00:11:42.756 Doorbell Stride: 4 bytes 00:11:42.756 NVM Subsystem Reset: Not Supported 00:11:42.756 Command Sets Supported 00:11:42.756 NVM Command Set: Supported 00:11:42.756 Boot Partition: Not Supported 00:11:42.756 Memory Page Size Minimum: 4096 bytes 00:11:42.756 Memory Page Size Maximum: 65536 bytes 00:11:42.756 Persistent Memory Region: Not Supported 00:11:42.756 Optional Asynchronous Events Supported 00:11:42.756 Namespace Attribute Notices: Supported 00:11:42.756 Firmware Activation Notices: Not Supported 00:11:42.756 ANA Change Notices: Not Supported 00:11:42.756 PLE Aggregate Log Change Notices: Not Supported 00:11:42.756 LBA Status Info Alert Notices: Not Supported 00:11:42.756 EGE Aggregate Log Change Notices: Not Supported 00:11:42.756 Normal NVM Subsystem Shutdown event: Not Supported 00:11:42.756 Zone Descriptor Change Notices: Not Supported 00:11:42.756 Discovery Log Change Notices: Not Supported 00:11:42.756 Controller Attributes 00:11:42.756 128-bit Host Identifier: Not Supported 00:11:42.756 Non-Operational Permissive Mode: Not Supported 00:11:42.756 NVM Sets: Not Supported 00:11:42.756 Read Recovery Levels: Not Supported 00:11:42.756 Endurance Groups: Not Supported 00:11:42.756 Predictable Latency Mode: Not Supported 00:11:42.756 Traffic Based Keep ALive: Not Supported 00:11:42.756 Namespace Granularity: Not Supported 00:11:42.756 SQ Associations: Not Supported 00:11:42.756 UUID List: Not Supported 00:11:42.756 Multi-Domain Subsystem: Not Supported 00:11:42.756 Fixed Capacity Management: Not Supported 00:11:42.756 Variable Capacity Management: Not Supported 00:11:42.756 Delete Endurance Group: Not Supported 00:11:42.756 Delete NVM Set: Not Supported 00:11:42.756 Extended LBA Formats Supported: Supported 00:11:42.756 Flexible Data Placement Supported: Not Supported 00:11:42.756 00:11:42.756 Controller Memory Buffer Support 00:11:42.756 ================================ 00:11:42.756 Supported: No 00:11:42.756 00:11:42.756 Persistent Memory Region Support 00:11:42.756 ================================ 00:11:42.756 Supported: No 00:11:42.756 00:11:42.756 Admin Command Set Attributes 00:11:42.756 ============================ 00:11:42.756 Security Send/Receive: Not Supported 00:11:42.756 
Format NVM: Supported 00:11:42.756 Firmware Activate/Download: Not Supported 00:11:42.756 Namespace Management: Supported 00:11:42.756 Device Self-Test: Not Supported 00:11:42.756 Directives: Supported 00:11:42.756 NVMe-MI: Not Supported 00:11:42.756 Virtualization Management: Not Supported 00:11:42.756 Doorbell Buffer Config: Supported 00:11:42.756 Get LBA Status Capability: Not Supported 00:11:42.756 Command & Feature Lockdown Capability: Not Supported 00:11:42.756 Abort Command Limit: 4 00:11:42.756 Async Event Request Limit: 4 00:11:42.756 Number of Firmware Slots: N/A 00:11:42.756 Firmware Slot 1 Read-Only: N/A 00:11:42.756 Firmware Activation Without Reset: N/A 00:11:42.756 Multiple Update Detection Support: N/A 00:11:42.756 Firmware Update Granularity: No Information Provided 00:11:42.756 Per-Namespace SMART Log: Yes 00:11:42.756 Asymmetric Namespace Access Log Page: Not Supported 00:11:42.756 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:42.756 Command Effects Log Page: Supported 00:11:42.756 Get Log Page Extended Data: Supported 00:11:42.756 Telemetry Log Pages: Not Supported 00:11:42.756 Persistent Event Log Pages: Not Supported 00:11:42.756 Supported Log Pages Log Page: May Support 00:11:42.756 Commands Supported & Effects Log Page: Not Supported 00:11:42.756 Feature Identifiers & Effects Log Page:May Support 00:11:42.756 NVMe-MI Commands & Effects Log Page: May Support 00:11:42.756 Data Area 4 for Telemetry Log: Not Supported 00:11:42.756 Error Log Page Entries Supported: 1 00:11:42.756 Keep Alive: Not Supported 00:11:42.756 00:11:42.756 NVM Command Set Attributes 00:11:42.756 ========================== 00:11:42.756 Submission Queue Entry Size 00:11:42.756 Max: 64 00:11:42.756 Min: 64 00:11:42.756 Completion Queue Entry Size 00:11:42.756 Max: 16 00:11:42.756 Min: 16 00:11:42.756 Number of Namespaces: 256 00:11:42.756 Compare Command: Supported 00:11:42.756 Write Uncorrectable Command: Not Supported 00:11:42.756 Dataset Management Command: Supported 00:11:42.756 Write Zeroes Command: Supported 00:11:42.756 Set Features Save Field: Supported 00:11:42.756 Reservations: Not Supported 00:11:42.756 Timestamp: Supported 00:11:42.756 Copy: Supported 00:11:42.756 Volatile Write Cache: Present 00:11:42.756 Atomic Write Unit (Normal): 1 00:11:42.756 Atomic Write Unit (PFail): 1 00:11:42.756 Atomic Compare & Write Unit: 1 00:11:42.756 Fused Compare & Write: Not Supported 00:11:42.756 Scatter-Gather List 00:11:42.756 SGL Command Set: Supported 00:11:42.756 SGL Keyed: Not Supported 00:11:42.756 SGL Bit Bucket Descriptor: Not Supported 00:11:42.756 SGL Metadata Pointer: Not Supported 00:11:42.756 Oversized SGL: Not Supported 00:11:42.756 SGL Metadata Address: Not Supported 00:11:42.756 SGL Offset: Not Supported 00:11:42.756 Transport SGL Data Block: Not Supported 00:11:42.757 Replay Protected Memory Block: Not Supported 00:11:42.757 00:11:42.757 Firmware Slot Information 00:11:42.757 ========================= 00:11:42.757 Active slot: 1 00:11:42.757 Slot 1 Firmware Revision: 1.0 00:11:42.757 00:11:42.757 00:11:42.757 Commands Supported and Effects 00:11:42.757 ============================== 00:11:42.757 Admin Commands 00:11:42.757 -------------- 00:11:42.757 Delete I/O Submission Queue (00h): Supported 00:11:42.757 Create I/O Submission Queue (01h): Supported 00:11:42.757 Get Log Page (02h): Supported 00:11:42.757 Delete I/O Completion Queue (04h): Supported 00:11:42.757 Create I/O Completion Queue (05h): Supported 00:11:42.757 Identify (06h): Supported 00:11:42.757 Abort (08h): Supported 
00:11:42.757 Set Features (09h): Supported 00:11:42.757 Get Features (0Ah): Supported 00:11:42.757 Asynchronous Event Request (0Ch): Supported 00:11:42.757 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:42.757 Directive Send (19h): Supported 00:11:42.757 Directive Receive (1Ah): Supported 00:11:42.757 Virtualization Management (1Ch): Supported 00:11:42.757 Doorbell Buffer Config (7Ch): Supported 00:11:42.757 Format NVM (80h): Supported LBA-Change 00:11:42.757 I/O Commands 00:11:42.757 ------------ 00:11:42.757 Flush (00h): Supported LBA-Change 00:11:42.757 Write (01h): Supported LBA-Change 00:11:42.757 Read (02h): Supported 00:11:42.757 Compare (05h): Supported 00:11:42.757 Write Zeroes (08h): Supported LBA-Change 00:11:42.757 Dataset Management (09h): Supported LBA-Change 00:11:42.757 Unknown (0Ch): Supported 00:11:42.757 Unknown (12h): Supported 00:11:42.757 Copy (19h): Supported LBA-Change 00:11:42.757 Unknown (1Dh): Supported LBA-Change 00:11:42.757 00:11:42.757 Error Log 00:11:42.757 ========= 00:11:42.757 00:11:42.757 Arbitration 00:11:42.757 =========== 00:11:42.757 Arbitration Burst: no limit 00:11:42.757 00:11:42.757 Power Management 00:11:42.757 ================ 00:11:42.757 Number of Power States: 1 00:11:42.757 Current Power State: Power State #0 00:11:42.757 Power State #0: 00:11:42.757 Max Power: 25.00 W 00:11:42.757 Non-Operational State: Operational 00:11:42.757 Entry Latency: 16 microseconds 00:11:42.757 Exit Latency: 4 microseconds 00:11:42.757 Relative Read Throughput: 0 00:11:42.757 Relative Read Latency: 0 00:11:42.757 Relative Write Throughput: 0 00:11:42.757 Relative Write Latency: 0 00:11:42.757 Idle Power: Not Reported 00:11:42.757 Active Power: Not Reported 00:11:42.757 Non-Operational Permissive Mode: Not Supported 00:11:42.757 00:11:42.757 Health Information 00:11:42.757 ================== 00:11:42.757 Critical Warnings: 00:11:42.757 Available Spare Space: OK 00:11:42.757 Temperature: OK 00:11:42.757 Device Reliability: OK 00:11:42.757 Read Only: No 00:11:42.757 Volatile Memory Backup: OK 00:11:42.757 Current Temperature: 323 Kelvin (50 Celsius) 00:11:42.757 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:42.757 Available Spare: 0% 00:11:42.757 Available Spare Threshold: 0% 00:11:42.757 Life Percentage Used: 0% 00:11:42.757 Data Units Read: 677 00:11:42.757 Data Units Written: 605 00:11:42.757 Host Read Commands: 36685 00:11:42.757 Host Write Commands: 36471 00:11:42.757 Controller Busy Time: 0 minutes 00:11:42.757 Power Cycles: 0 00:11:42.757 Power On Hours: 0 hours 00:11:42.757 Unsafe Shutdowns: 0 00:11:42.757 Unrecoverable Media Errors: 0 00:11:42.757 Lifetime Error Log Entries: 0 00:11:42.757 Warning Temperature Time: 0 minutes 00:11:42.757 Critical Temperature Time: 0 minutes 00:11:42.757 00:11:42.757 Number of Queues 00:11:42.757 ================ 00:11:42.757 Number of I/O Submission Queues: 64 00:11:42.757 Number of I/O Completion Queues: 64 00:11:42.757 00:11:42.757 ZNS Specific Controller Data 00:11:42.757 ============================ 00:11:42.757 Zone Append Size Limit: 0 00:11:42.757 00:11:42.757 00:11:42.757 Active Namespaces 00:11:42.757 ================= 00:11:42.757 Namespace ID:1 00:11:42.757 Error Recovery Timeout: Unlimited 00:11:42.757 Command Set Identifier: NVM (00h) 00:11:42.757 Deallocate: Supported 00:11:42.757 Deallocated/Unwritten Error: Supported 00:11:42.757 Deallocated Read Value: All 0x00 00:11:42.757 Deallocate in Write Zeroes: Not Supported 00:11:42.757 Deallocated Guard Field: 0xFFFF 00:11:42.757 Flush: 
Supported 00:11:42.757 Reservation: Not Supported 00:11:42.757 Metadata Transferred as: Separate Metadata Buffer 00:11:42.757 Namespace Sharing Capabilities: Private 00:11:42.757 Size (in LBAs): 1548666 (5GiB) 00:11:42.757 Capacity (in LBAs): 1548666 (5GiB) 00:11:42.757 Utilization (in LBAs): 1548666 (5GiB) 00:11:42.757 Thin Provisioning: Not Supported 00:11:42.757 Per-NS Atomic Units: No 00:11:42.757 Maximum Single Source Range Length: 128 00:11:42.757 Maximum Copy Length: 128 00:11:42.757 Maximum Source Range Count: 128 00:11:42.757 NGUID/EUI64 Never Reused: No 00:11:42.757 Namespace Write Protected: No 00:11:42.757 Number of LBA Formats: 8 00:11:42.757 Current LBA Format: LBA Format #07 00:11:42.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:42.757 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:42.757 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:42.757 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:42.757 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:42.757 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:42.757 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:42.757 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:42.757 00:11:42.757 NVM Specific Namespace Data 00:11:42.757 =========================== 00:11:42.757 Logical Block Storage Tag Mask: 0 00:11:42.757 Protection Information Capabilities: 00:11:42.757 16b Guard Protection Information Storage Tag Support: No 00:11:42.757 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:42.757 Storage Tag Check Read Support: No 00:11:42.757 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:42.757 13:31:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:42.757 13:31:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:43.020 ===================================================== 00:11:43.020 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:43.020 ===================================================== 00:11:43.020 Controller Capabilities/Features 00:11:43.020 ================================ 00:11:43.020 Vendor ID: 1b36 00:11:43.020 Subsystem Vendor ID: 1af4 00:11:43.020 Serial Number: 12341 00:11:43.020 Model Number: QEMU NVMe Ctrl 00:11:43.020 Firmware Version: 8.0.0 00:11:43.020 Recommended Arb Burst: 6 00:11:43.020 IEEE OUI Identifier: 00 54 52 00:11:43.020 Multi-path I/O 00:11:43.020 May have multiple subsystem ports: No 00:11:43.020 May have multiple controllers: No 00:11:43.020 Associated with SR-IOV VF: No 00:11:43.020 Max Data Transfer Size: 524288 00:11:43.020 Max Number of Namespaces: 256 00:11:43.020 Max Number of I/O Queues: 64 00:11:43.020 NVMe 
Specification Version (VS): 1.4 00:11:43.020 NVMe Specification Version (Identify): 1.4 00:11:43.020 Maximum Queue Entries: 2048 00:11:43.020 Contiguous Queues Required: Yes 00:11:43.020 Arbitration Mechanisms Supported 00:11:43.020 Weighted Round Robin: Not Supported 00:11:43.020 Vendor Specific: Not Supported 00:11:43.020 Reset Timeout: 7500 ms 00:11:43.020 Doorbell Stride: 4 bytes 00:11:43.020 NVM Subsystem Reset: Not Supported 00:11:43.020 Command Sets Supported 00:11:43.020 NVM Command Set: Supported 00:11:43.020 Boot Partition: Not Supported 00:11:43.020 Memory Page Size Minimum: 4096 bytes 00:11:43.020 Memory Page Size Maximum: 65536 bytes 00:11:43.020 Persistent Memory Region: Not Supported 00:11:43.020 Optional Asynchronous Events Supported 00:11:43.020 Namespace Attribute Notices: Supported 00:11:43.020 Firmware Activation Notices: Not Supported 00:11:43.020 ANA Change Notices: Not Supported 00:11:43.020 PLE Aggregate Log Change Notices: Not Supported 00:11:43.020 LBA Status Info Alert Notices: Not Supported 00:11:43.020 EGE Aggregate Log Change Notices: Not Supported 00:11:43.020 Normal NVM Subsystem Shutdown event: Not Supported 00:11:43.020 Zone Descriptor Change Notices: Not Supported 00:11:43.020 Discovery Log Change Notices: Not Supported 00:11:43.020 Controller Attributes 00:11:43.020 128-bit Host Identifier: Not Supported 00:11:43.020 Non-Operational Permissive Mode: Not Supported 00:11:43.020 NVM Sets: Not Supported 00:11:43.020 Read Recovery Levels: Not Supported 00:11:43.020 Endurance Groups: Not Supported 00:11:43.020 Predictable Latency Mode: Not Supported 00:11:43.020 Traffic Based Keep ALive: Not Supported 00:11:43.020 Namespace Granularity: Not Supported 00:11:43.020 SQ Associations: Not Supported 00:11:43.020 UUID List: Not Supported 00:11:43.020 Multi-Domain Subsystem: Not Supported 00:11:43.020 Fixed Capacity Management: Not Supported 00:11:43.020 Variable Capacity Management: Not Supported 00:11:43.020 Delete Endurance Group: Not Supported 00:11:43.020 Delete NVM Set: Not Supported 00:11:43.020 Extended LBA Formats Supported: Supported 00:11:43.020 Flexible Data Placement Supported: Not Supported 00:11:43.020 00:11:43.020 Controller Memory Buffer Support 00:11:43.020 ================================ 00:11:43.020 Supported: No 00:11:43.020 00:11:43.020 Persistent Memory Region Support 00:11:43.020 ================================ 00:11:43.020 Supported: No 00:11:43.020 00:11:43.020 Admin Command Set Attributes 00:11:43.020 ============================ 00:11:43.020 Security Send/Receive: Not Supported 00:11:43.020 Format NVM: Supported 00:11:43.020 Firmware Activate/Download: Not Supported 00:11:43.020 Namespace Management: Supported 00:11:43.020 Device Self-Test: Not Supported 00:11:43.020 Directives: Supported 00:11:43.020 NVMe-MI: Not Supported 00:11:43.020 Virtualization Management: Not Supported 00:11:43.020 Doorbell Buffer Config: Supported 00:11:43.020 Get LBA Status Capability: Not Supported 00:11:43.020 Command & Feature Lockdown Capability: Not Supported 00:11:43.020 Abort Command Limit: 4 00:11:43.020 Async Event Request Limit: 4 00:11:43.020 Number of Firmware Slots: N/A 00:11:43.020 Firmware Slot 1 Read-Only: N/A 00:11:43.020 Firmware Activation Without Reset: N/A 00:11:43.020 Multiple Update Detection Support: N/A 00:11:43.020 Firmware Update Granularity: No Information Provided 00:11:43.020 Per-Namespace SMART Log: Yes 00:11:43.020 Asymmetric Namespace Access Log Page: Not Supported 00:11:43.020 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:11:43.020 Command Effects Log Page: Supported 00:11:43.020 Get Log Page Extended Data: Supported 00:11:43.020 Telemetry Log Pages: Not Supported 00:11:43.020 Persistent Event Log Pages: Not Supported 00:11:43.020 Supported Log Pages Log Page: May Support 00:11:43.020 Commands Supported & Effects Log Page: Not Supported 00:11:43.020 Feature Identifiers & Effects Log Page:May Support 00:11:43.020 NVMe-MI Commands & Effects Log Page: May Support 00:11:43.020 Data Area 4 for Telemetry Log: Not Supported 00:11:43.020 Error Log Page Entries Supported: 1 00:11:43.020 Keep Alive: Not Supported 00:11:43.020 00:11:43.020 NVM Command Set Attributes 00:11:43.020 ========================== 00:11:43.020 Submission Queue Entry Size 00:11:43.020 Max: 64 00:11:43.020 Min: 64 00:11:43.020 Completion Queue Entry Size 00:11:43.020 Max: 16 00:11:43.020 Min: 16 00:11:43.020 Number of Namespaces: 256 00:11:43.020 Compare Command: Supported 00:11:43.020 Write Uncorrectable Command: Not Supported 00:11:43.020 Dataset Management Command: Supported 00:11:43.020 Write Zeroes Command: Supported 00:11:43.020 Set Features Save Field: Supported 00:11:43.020 Reservations: Not Supported 00:11:43.020 Timestamp: Supported 00:11:43.020 Copy: Supported 00:11:43.020 Volatile Write Cache: Present 00:11:43.020 Atomic Write Unit (Normal): 1 00:11:43.020 Atomic Write Unit (PFail): 1 00:11:43.020 Atomic Compare & Write Unit: 1 00:11:43.020 Fused Compare & Write: Not Supported 00:11:43.020 Scatter-Gather List 00:11:43.021 SGL Command Set: Supported 00:11:43.021 SGL Keyed: Not Supported 00:11:43.021 SGL Bit Bucket Descriptor: Not Supported 00:11:43.021 SGL Metadata Pointer: Not Supported 00:11:43.021 Oversized SGL: Not Supported 00:11:43.021 SGL Metadata Address: Not Supported 00:11:43.021 SGL Offset: Not Supported 00:11:43.021 Transport SGL Data Block: Not Supported 00:11:43.021 Replay Protected Memory Block: Not Supported 00:11:43.021 00:11:43.021 Firmware Slot Information 00:11:43.021 ========================= 00:11:43.021 Active slot: 1 00:11:43.021 Slot 1 Firmware Revision: 1.0 00:11:43.021 00:11:43.021 00:11:43.021 Commands Supported and Effects 00:11:43.021 ============================== 00:11:43.021 Admin Commands 00:11:43.021 -------------- 00:11:43.021 Delete I/O Submission Queue (00h): Supported 00:11:43.021 Create I/O Submission Queue (01h): Supported 00:11:43.021 Get Log Page (02h): Supported 00:11:43.021 Delete I/O Completion Queue (04h): Supported 00:11:43.021 Create I/O Completion Queue (05h): Supported 00:11:43.021 Identify (06h): Supported 00:11:43.021 Abort (08h): Supported 00:11:43.021 Set Features (09h): Supported 00:11:43.021 Get Features (0Ah): Supported 00:11:43.021 Asynchronous Event Request (0Ch): Supported 00:11:43.021 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:43.021 Directive Send (19h): Supported 00:11:43.021 Directive Receive (1Ah): Supported 00:11:43.021 Virtualization Management (1Ch): Supported 00:11:43.021 Doorbell Buffer Config (7Ch): Supported 00:11:43.021 Format NVM (80h): Supported LBA-Change 00:11:43.021 I/O Commands 00:11:43.021 ------------ 00:11:43.021 Flush (00h): Supported LBA-Change 00:11:43.021 Write (01h): Supported LBA-Change 00:11:43.021 Read (02h): Supported 00:11:43.021 Compare (05h): Supported 00:11:43.021 Write Zeroes (08h): Supported LBA-Change 00:11:43.021 Dataset Management (09h): Supported LBA-Change 00:11:43.021 Unknown (0Ch): Supported 00:11:43.021 Unknown (12h): Supported 00:11:43.021 Copy (19h): Supported LBA-Change 00:11:43.021 Unknown (1Dh): 
Supported LBA-Change 00:11:43.021 00:11:43.021 Error Log 00:11:43.021 ========= 00:11:43.021 00:11:43.021 Arbitration 00:11:43.021 =========== 00:11:43.021 Arbitration Burst: no limit 00:11:43.021 00:11:43.021 Power Management 00:11:43.021 ================ 00:11:43.021 Number of Power States: 1 00:11:43.021 Current Power State: Power State #0 00:11:43.021 Power State #0: 00:11:43.021 Max Power: 25.00 W 00:11:43.021 Non-Operational State: Operational 00:11:43.021 Entry Latency: 16 microseconds 00:11:43.021 Exit Latency: 4 microseconds 00:11:43.021 Relative Read Throughput: 0 00:11:43.021 Relative Read Latency: 0 00:11:43.021 Relative Write Throughput: 0 00:11:43.021 Relative Write Latency: 0 00:11:43.021 Idle Power: Not Reported 00:11:43.021 Active Power: Not Reported 00:11:43.021 Non-Operational Permissive Mode: Not Supported 00:11:43.021 00:11:43.021 Health Information 00:11:43.021 ================== 00:11:43.021 Critical Warnings: 00:11:43.021 Available Spare Space: OK 00:11:43.021 Temperature: OK 00:11:43.021 Device Reliability: OK 00:11:43.021 Read Only: No 00:11:43.021 Volatile Memory Backup: OK 00:11:43.021 Current Temperature: 323 Kelvin (50 Celsius) 00:11:43.021 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:43.021 Available Spare: 0% 00:11:43.021 Available Spare Threshold: 0% 00:11:43.021 Life Percentage Used: 0% 00:11:43.021 Data Units Read: 1001 00:11:43.021 Data Units Written: 869 00:11:43.021 Host Read Commands: 52178 00:11:43.021 Host Write Commands: 50956 00:11:43.021 Controller Busy Time: 0 minutes 00:11:43.021 Power Cycles: 0 00:11:43.021 Power On Hours: 0 hours 00:11:43.021 Unsafe Shutdowns: 0 00:11:43.021 Unrecoverable Media Errors: 0 00:11:43.021 Lifetime Error Log Entries: 0 00:11:43.021 Warning Temperature Time: 0 minutes 00:11:43.021 Critical Temperature Time: 0 minutes 00:11:43.021 00:11:43.021 Number of Queues 00:11:43.021 ================ 00:11:43.021 Number of I/O Submission Queues: 64 00:11:43.021 Number of I/O Completion Queues: 64 00:11:43.021 00:11:43.021 ZNS Specific Controller Data 00:11:43.021 ============================ 00:11:43.021 Zone Append Size Limit: 0 00:11:43.021 00:11:43.021 00:11:43.021 Active Namespaces 00:11:43.021 ================= 00:11:43.021 Namespace ID:1 00:11:43.021 Error Recovery Timeout: Unlimited 00:11:43.021 Command Set Identifier: NVM (00h) 00:11:43.021 Deallocate: Supported 00:11:43.021 Deallocated/Unwritten Error: Supported 00:11:43.021 Deallocated Read Value: All 0x00 00:11:43.021 Deallocate in Write Zeroes: Not Supported 00:11:43.021 Deallocated Guard Field: 0xFFFF 00:11:43.021 Flush: Supported 00:11:43.021 Reservation: Not Supported 00:11:43.021 Namespace Sharing Capabilities: Private 00:11:43.021 Size (in LBAs): 1310720 (5GiB) 00:11:43.021 Capacity (in LBAs): 1310720 (5GiB) 00:11:43.021 Utilization (in LBAs): 1310720 (5GiB) 00:11:43.021 Thin Provisioning: Not Supported 00:11:43.021 Per-NS Atomic Units: No 00:11:43.021 Maximum Single Source Range Length: 128 00:11:43.021 Maximum Copy Length: 128 00:11:43.021 Maximum Source Range Count: 128 00:11:43.021 NGUID/EUI64 Never Reused: No 00:11:43.021 Namespace Write Protected: No 00:11:43.021 Number of LBA Formats: 8 00:11:43.021 Current LBA Format: LBA Format #04 00:11:43.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.021 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:43.021 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:43.021 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:43.021 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:11:43.021 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:43.021 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:43.021 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:43.021 00:11:43.021 NVM Specific Namespace Data 00:11:43.021 =========================== 00:11:43.021 Logical Block Storage Tag Mask: 0 00:11:43.021 Protection Information Capabilities: 00:11:43.021 16b Guard Protection Information Storage Tag Support: No 00:11:43.021 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:43.021 Storage Tag Check Read Support: No 00:11:43.021 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.021 13:31:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:43.021 13:31:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:43.284 ===================================================== 00:11:43.284 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:43.284 ===================================================== 00:11:43.284 Controller Capabilities/Features 00:11:43.284 ================================ 00:11:43.284 Vendor ID: 1b36 00:11:43.284 Subsystem Vendor ID: 1af4 00:11:43.284 Serial Number: 12342 00:11:43.284 Model Number: QEMU NVMe Ctrl 00:11:43.284 Firmware Version: 8.0.0 00:11:43.284 Recommended Arb Burst: 6 00:11:43.284 IEEE OUI Identifier: 00 54 52 00:11:43.284 Multi-path I/O 00:11:43.284 May have multiple subsystem ports: No 00:11:43.284 May have multiple controllers: No 00:11:43.284 Associated with SR-IOV VF: No 00:11:43.284 Max Data Transfer Size: 524288 00:11:43.284 Max Number of Namespaces: 256 00:11:43.284 Max Number of I/O Queues: 64 00:11:43.284 NVMe Specification Version (VS): 1.4 00:11:43.284 NVMe Specification Version (Identify): 1.4 00:11:43.284 Maximum Queue Entries: 2048 00:11:43.284 Contiguous Queues Required: Yes 00:11:43.284 Arbitration Mechanisms Supported 00:11:43.284 Weighted Round Robin: Not Supported 00:11:43.284 Vendor Specific: Not Supported 00:11:43.284 Reset Timeout: 7500 ms 00:11:43.284 Doorbell Stride: 4 bytes 00:11:43.284 NVM Subsystem Reset: Not Supported 00:11:43.284 Command Sets Supported 00:11:43.284 NVM Command Set: Supported 00:11:43.284 Boot Partition: Not Supported 00:11:43.284 Memory Page Size Minimum: 4096 bytes 00:11:43.284 Memory Page Size Maximum: 65536 bytes 00:11:43.284 Persistent Memory Region: Not Supported 00:11:43.284 Optional Asynchronous Events Supported 00:11:43.284 Namespace Attribute Notices: Supported 00:11:43.284 Firmware Activation Notices: Not Supported 00:11:43.284 ANA Change Notices: Not Supported 00:11:43.284 PLE Aggregate Log Change Notices: Not Supported 00:11:43.284 LBA Status Info Alert Notices: 
Not Supported 00:11:43.284 EGE Aggregate Log Change Notices: Not Supported 00:11:43.284 Normal NVM Subsystem Shutdown event: Not Supported 00:11:43.284 Zone Descriptor Change Notices: Not Supported 00:11:43.284 Discovery Log Change Notices: Not Supported 00:11:43.284 Controller Attributes 00:11:43.284 128-bit Host Identifier: Not Supported 00:11:43.284 Non-Operational Permissive Mode: Not Supported 00:11:43.284 NVM Sets: Not Supported 00:11:43.284 Read Recovery Levels: Not Supported 00:11:43.284 Endurance Groups: Not Supported 00:11:43.284 Predictable Latency Mode: Not Supported 00:11:43.284 Traffic Based Keep ALive: Not Supported 00:11:43.284 Namespace Granularity: Not Supported 00:11:43.284 SQ Associations: Not Supported 00:11:43.284 UUID List: Not Supported 00:11:43.284 Multi-Domain Subsystem: Not Supported 00:11:43.284 Fixed Capacity Management: Not Supported 00:11:43.284 Variable Capacity Management: Not Supported 00:11:43.284 Delete Endurance Group: Not Supported 00:11:43.285 Delete NVM Set: Not Supported 00:11:43.285 Extended LBA Formats Supported: Supported 00:11:43.285 Flexible Data Placement Supported: Not Supported 00:11:43.285 00:11:43.285 Controller Memory Buffer Support 00:11:43.285 ================================ 00:11:43.285 Supported: No 00:11:43.285 00:11:43.285 Persistent Memory Region Support 00:11:43.285 ================================ 00:11:43.285 Supported: No 00:11:43.285 00:11:43.285 Admin Command Set Attributes 00:11:43.285 ============================ 00:11:43.285 Security Send/Receive: Not Supported 00:11:43.285 Format NVM: Supported 00:11:43.285 Firmware Activate/Download: Not Supported 00:11:43.285 Namespace Management: Supported 00:11:43.285 Device Self-Test: Not Supported 00:11:43.285 Directives: Supported 00:11:43.285 NVMe-MI: Not Supported 00:11:43.285 Virtualization Management: Not Supported 00:11:43.285 Doorbell Buffer Config: Supported 00:11:43.285 Get LBA Status Capability: Not Supported 00:11:43.285 Command & Feature Lockdown Capability: Not Supported 00:11:43.285 Abort Command Limit: 4 00:11:43.285 Async Event Request Limit: 4 00:11:43.285 Number of Firmware Slots: N/A 00:11:43.285 Firmware Slot 1 Read-Only: N/A 00:11:43.285 Firmware Activation Without Reset: N/A 00:11:43.285 Multiple Update Detection Support: N/A 00:11:43.285 Firmware Update Granularity: No Information Provided 00:11:43.285 Per-Namespace SMART Log: Yes 00:11:43.285 Asymmetric Namespace Access Log Page: Not Supported 00:11:43.285 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:43.285 Command Effects Log Page: Supported 00:11:43.285 Get Log Page Extended Data: Supported 00:11:43.285 Telemetry Log Pages: Not Supported 00:11:43.285 Persistent Event Log Pages: Not Supported 00:11:43.285 Supported Log Pages Log Page: May Support 00:11:43.285 Commands Supported & Effects Log Page: Not Supported 00:11:43.285 Feature Identifiers & Effects Log Page:May Support 00:11:43.285 NVMe-MI Commands & Effects Log Page: May Support 00:11:43.285 Data Area 4 for Telemetry Log: Not Supported 00:11:43.285 Error Log Page Entries Supported: 1 00:11:43.285 Keep Alive: Not Supported 00:11:43.285 00:11:43.285 NVM Command Set Attributes 00:11:43.285 ========================== 00:11:43.285 Submission Queue Entry Size 00:11:43.285 Max: 64 00:11:43.285 Min: 64 00:11:43.285 Completion Queue Entry Size 00:11:43.285 Max: 16 00:11:43.285 Min: 16 00:11:43.285 Number of Namespaces: 256 00:11:43.285 Compare Command: Supported 00:11:43.285 Write Uncorrectable Command: Not Supported 00:11:43.285 Dataset Management Command: 
Supported 00:11:43.285 Write Zeroes Command: Supported 00:11:43.285 Set Features Save Field: Supported 00:11:43.285 Reservations: Not Supported 00:11:43.285 Timestamp: Supported 00:11:43.285 Copy: Supported 00:11:43.285 Volatile Write Cache: Present 00:11:43.285 Atomic Write Unit (Normal): 1 00:11:43.285 Atomic Write Unit (PFail): 1 00:11:43.285 Atomic Compare & Write Unit: 1 00:11:43.285 Fused Compare & Write: Not Supported 00:11:43.285 Scatter-Gather List 00:11:43.285 SGL Command Set: Supported 00:11:43.285 SGL Keyed: Not Supported 00:11:43.285 SGL Bit Bucket Descriptor: Not Supported 00:11:43.285 SGL Metadata Pointer: Not Supported 00:11:43.285 Oversized SGL: Not Supported 00:11:43.285 SGL Metadata Address: Not Supported 00:11:43.285 SGL Offset: Not Supported 00:11:43.285 Transport SGL Data Block: Not Supported 00:11:43.285 Replay Protected Memory Block: Not Supported 00:11:43.285 00:11:43.285 Firmware Slot Information 00:11:43.285 ========================= 00:11:43.285 Active slot: 1 00:11:43.285 Slot 1 Firmware Revision: 1.0 00:11:43.285 00:11:43.285 00:11:43.285 Commands Supported and Effects 00:11:43.285 ============================== 00:11:43.285 Admin Commands 00:11:43.285 -------------- 00:11:43.285 Delete I/O Submission Queue (00h): Supported 00:11:43.285 Create I/O Submission Queue (01h): Supported 00:11:43.285 Get Log Page (02h): Supported 00:11:43.285 Delete I/O Completion Queue (04h): Supported 00:11:43.285 Create I/O Completion Queue (05h): Supported 00:11:43.285 Identify (06h): Supported 00:11:43.285 Abort (08h): Supported 00:11:43.285 Set Features (09h): Supported 00:11:43.285 Get Features (0Ah): Supported 00:11:43.285 Asynchronous Event Request (0Ch): Supported 00:11:43.285 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:43.285 Directive Send (19h): Supported 00:11:43.285 Directive Receive (1Ah): Supported 00:11:43.285 Virtualization Management (1Ch): Supported 00:11:43.285 Doorbell Buffer Config (7Ch): Supported 00:11:43.285 Format NVM (80h): Supported LBA-Change 00:11:43.285 I/O Commands 00:11:43.285 ------------ 00:11:43.285 Flush (00h): Supported LBA-Change 00:11:43.285 Write (01h): Supported LBA-Change 00:11:43.285 Read (02h): Supported 00:11:43.285 Compare (05h): Supported 00:11:43.285 Write Zeroes (08h): Supported LBA-Change 00:11:43.285 Dataset Management (09h): Supported LBA-Change 00:11:43.285 Unknown (0Ch): Supported 00:11:43.285 Unknown (12h): Supported 00:11:43.285 Copy (19h): Supported LBA-Change 00:11:43.285 Unknown (1Dh): Supported LBA-Change 00:11:43.285 00:11:43.285 Error Log 00:11:43.285 ========= 00:11:43.285 00:11:43.285 Arbitration 00:11:43.285 =========== 00:11:43.285 Arbitration Burst: no limit 00:11:43.285 00:11:43.285 Power Management 00:11:43.285 ================ 00:11:43.285 Number of Power States: 1 00:11:43.285 Current Power State: Power State #0 00:11:43.285 Power State #0: 00:11:43.285 Max Power: 25.00 W 00:11:43.285 Non-Operational State: Operational 00:11:43.285 Entry Latency: 16 microseconds 00:11:43.285 Exit Latency: 4 microseconds 00:11:43.285 Relative Read Throughput: 0 00:11:43.285 Relative Read Latency: 0 00:11:43.285 Relative Write Throughput: 0 00:11:43.285 Relative Write Latency: 0 00:11:43.285 Idle Power: Not Reported 00:11:43.285 Active Power: Not Reported 00:11:43.285 Non-Operational Permissive Mode: Not Supported 00:11:43.285 00:11:43.285 Health Information 00:11:43.285 ================== 00:11:43.285 Critical Warnings: 00:11:43.285 Available Spare Space: OK 00:11:43.285 Temperature: OK 00:11:43.285 Device 
Reliability: OK 00:11:43.285 Read Only: No 00:11:43.285 Volatile Memory Backup: OK 00:11:43.285 Current Temperature: 323 Kelvin (50 Celsius) 00:11:43.285 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:43.285 Available Spare: 0% 00:11:43.285 Available Spare Threshold: 0% 00:11:43.285 Life Percentage Used: 0% 00:11:43.285 Data Units Read: 2136 00:11:43.285 Data Units Written: 1924 00:11:43.285 Host Read Commands: 111491 00:11:43.285 Host Write Commands: 109760 00:11:43.285 Controller Busy Time: 0 minutes 00:11:43.285 Power Cycles: 0 00:11:43.285 Power On Hours: 0 hours 00:11:43.285 Unsafe Shutdowns: 0 00:11:43.285 Unrecoverable Media Errors: 0 00:11:43.285 Lifetime Error Log Entries: 0 00:11:43.285 Warning Temperature Time: 0 minutes 00:11:43.285 Critical Temperature Time: 0 minutes 00:11:43.285 00:11:43.285 Number of Queues 00:11:43.285 ================ 00:11:43.285 Number of I/O Submission Queues: 64 00:11:43.285 Number of I/O Completion Queues: 64 00:11:43.285 00:11:43.285 ZNS Specific Controller Data 00:11:43.285 ============================ 00:11:43.285 Zone Append Size Limit: 0 00:11:43.285 00:11:43.285 00:11:43.285 Active Namespaces 00:11:43.285 ================= 00:11:43.285 Namespace ID:1 00:11:43.285 Error Recovery Timeout: Unlimited 00:11:43.285 Command Set Identifier: NVM (00h) 00:11:43.285 Deallocate: Supported 00:11:43.285 Deallocated/Unwritten Error: Supported 00:11:43.285 Deallocated Read Value: All 0x00 00:11:43.285 Deallocate in Write Zeroes: Not Supported 00:11:43.285 Deallocated Guard Field: 0xFFFF 00:11:43.285 Flush: Supported 00:11:43.285 Reservation: Not Supported 00:11:43.285 Namespace Sharing Capabilities: Private 00:11:43.285 Size (in LBAs): 1048576 (4GiB) 00:11:43.285 Capacity (in LBAs): 1048576 (4GiB) 00:11:43.285 Utilization (in LBAs): 1048576 (4GiB) 00:11:43.285 Thin Provisioning: Not Supported 00:11:43.285 Per-NS Atomic Units: No 00:11:43.285 Maximum Single Source Range Length: 128 00:11:43.285 Maximum Copy Length: 128 00:11:43.285 Maximum Source Range Count: 128 00:11:43.285 NGUID/EUI64 Never Reused: No 00:11:43.285 Namespace Write Protected: No 00:11:43.285 Number of LBA Formats: 8 00:11:43.285 Current LBA Format: LBA Format #04 00:11:43.285 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.285 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:43.285 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:43.285 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:43.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:43.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:43.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:43.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:43.286 00:11:43.286 NVM Specific Namespace Data 00:11:43.286 =========================== 00:11:43.286 Logical Block Storage Tag Mask: 0 00:11:43.286 Protection Information Capabilities: 00:11:43.286 16b Guard Protection Information Storage Tag Support: No 00:11:43.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:43.286 Storage Tag Check Read Support: No 00:11:43.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Namespace ID:2 00:11:43.286 Error Recovery Timeout: Unlimited 00:11:43.286 Command Set Identifier: NVM (00h) 00:11:43.286 Deallocate: Supported 00:11:43.286 Deallocated/Unwritten Error: Supported 00:11:43.286 Deallocated Read Value: All 0x00 00:11:43.286 Deallocate in Write Zeroes: Not Supported 00:11:43.286 Deallocated Guard Field: 0xFFFF 00:11:43.286 Flush: Supported 00:11:43.286 Reservation: Not Supported 00:11:43.286 Namespace Sharing Capabilities: Private 00:11:43.286 Size (in LBAs): 1048576 (4GiB) 00:11:43.286 Capacity (in LBAs): 1048576 (4GiB) 00:11:43.286 Utilization (in LBAs): 1048576 (4GiB) 00:11:43.286 Thin Provisioning: Not Supported 00:11:43.286 Per-NS Atomic Units: No 00:11:43.286 Maximum Single Source Range Length: 128 00:11:43.286 Maximum Copy Length: 128 00:11:43.286 Maximum Source Range Count: 128 00:11:43.286 NGUID/EUI64 Never Reused: No 00:11:43.286 Namespace Write Protected: No 00:11:43.286 Number of LBA Formats: 8 00:11:43.286 Current LBA Format: LBA Format #04 00:11:43.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:43.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:43.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:43.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:43.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:43.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:43.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:43.286 00:11:43.286 NVM Specific Namespace Data 00:11:43.286 =========================== 00:11:43.286 Logical Block Storage Tag Mask: 0 00:11:43.286 Protection Information Capabilities: 00:11:43.286 16b Guard Protection Information Storage Tag Support: No 00:11:43.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:43.286 Storage Tag Check Read Support: No 00:11:43.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Namespace ID:3 00:11:43.286 Error Recovery Timeout: Unlimited 00:11:43.286 Command Set Identifier: NVM (00h) 00:11:43.286 Deallocate: Supported 00:11:43.286 Deallocated/Unwritten Error: Supported 00:11:43.286 Deallocated Read Value: All 0x00 00:11:43.286 Deallocate in Write Zeroes: Not Supported 00:11:43.286 Deallocated Guard Field: 0xFFFF 00:11:43.286 Flush: Supported 00:11:43.286 Reservation: Not Supported 00:11:43.286 
Namespace Sharing Capabilities: Private 00:11:43.286 Size (in LBAs): 1048576 (4GiB) 00:11:43.286 Capacity (in LBAs): 1048576 (4GiB) 00:11:43.286 Utilization (in LBAs): 1048576 (4GiB) 00:11:43.286 Thin Provisioning: Not Supported 00:11:43.286 Per-NS Atomic Units: No 00:11:43.286 Maximum Single Source Range Length: 128 00:11:43.286 Maximum Copy Length: 128 00:11:43.286 Maximum Source Range Count: 128 00:11:43.286 NGUID/EUI64 Never Reused: No 00:11:43.286 Namespace Write Protected: No 00:11:43.286 Number of LBA Formats: 8 00:11:43.286 Current LBA Format: LBA Format #04 00:11:43.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:43.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:43.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:43.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:43.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:43.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:43.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:43.286 00:11:43.286 NVM Specific Namespace Data 00:11:43.286 =========================== 00:11:43.286 Logical Block Storage Tag Mask: 0 00:11:43.286 Protection Information Capabilities: 00:11:43.286 16b Guard Protection Information Storage Tag Support: No 00:11:43.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:43.286 Storage Tag Check Read Support: No 00:11:43.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.286 13:31:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:43.286 13:31:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:43.549 ===================================================== 00:11:43.549 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:43.549 ===================================================== 00:11:43.549 Controller Capabilities/Features 00:11:43.549 ================================ 00:11:43.549 Vendor ID: 1b36 00:11:43.549 Subsystem Vendor ID: 1af4 00:11:43.549 Serial Number: 12343 00:11:43.549 Model Number: QEMU NVMe Ctrl 00:11:43.549 Firmware Version: 8.0.0 00:11:43.549 Recommended Arb Burst: 6 00:11:43.549 IEEE OUI Identifier: 00 54 52 00:11:43.549 Multi-path I/O 00:11:43.549 May have multiple subsystem ports: No 00:11:43.549 May have multiple controllers: Yes 00:11:43.549 Associated with SR-IOV VF: No 00:11:43.549 Max Data Transfer Size: 524288 00:11:43.549 Max Number of Namespaces: 256 00:11:43.549 Max Number of I/O Queues: 64 00:11:43.549 NVMe Specification Version (VS): 1.4 00:11:43.549 NVMe Specification Version (Identify): 1.4 00:11:43.549 Maximum Queue Entries: 2048 
00:11:43.549 Contiguous Queues Required: Yes 00:11:43.549 Arbitration Mechanisms Supported 00:11:43.549 Weighted Round Robin: Not Supported 00:11:43.549 Vendor Specific: Not Supported 00:11:43.549 Reset Timeout: 7500 ms 00:11:43.549 Doorbell Stride: 4 bytes 00:11:43.549 NVM Subsystem Reset: Not Supported 00:11:43.549 Command Sets Supported 00:11:43.549 NVM Command Set: Supported 00:11:43.549 Boot Partition: Not Supported 00:11:43.549 Memory Page Size Minimum: 4096 bytes 00:11:43.549 Memory Page Size Maximum: 65536 bytes 00:11:43.549 Persistent Memory Region: Not Supported 00:11:43.549 Optional Asynchronous Events Supported 00:11:43.549 Namespace Attribute Notices: Supported 00:11:43.549 Firmware Activation Notices: Not Supported 00:11:43.549 ANA Change Notices: Not Supported 00:11:43.549 PLE Aggregate Log Change Notices: Not Supported 00:11:43.550 LBA Status Info Alert Notices: Not Supported 00:11:43.550 EGE Aggregate Log Change Notices: Not Supported 00:11:43.550 Normal NVM Subsystem Shutdown event: Not Supported 00:11:43.550 Zone Descriptor Change Notices: Not Supported 00:11:43.550 Discovery Log Change Notices: Not Supported 00:11:43.550 Controller Attributes 00:11:43.550 128-bit Host Identifier: Not Supported 00:11:43.550 Non-Operational Permissive Mode: Not Supported 00:11:43.550 NVM Sets: Not Supported 00:11:43.550 Read Recovery Levels: Not Supported 00:11:43.550 Endurance Groups: Supported 00:11:43.550 Predictable Latency Mode: Not Supported 00:11:43.550 Traffic Based Keep Alive: Not Supported 00:11:43.550 Namespace Granularity: Not Supported 00:11:43.550 SQ Associations: Not Supported 00:11:43.550 UUID List: Not Supported 00:11:43.550 Multi-Domain Subsystem: Not Supported 00:11:43.550 Fixed Capacity Management: Not Supported 00:11:43.550 Variable Capacity Management: Not Supported 00:11:43.550 Delete Endurance Group: Not Supported 00:11:43.550 Delete NVM Set: Not Supported 00:11:43.550 Extended LBA Formats Supported: Supported 00:11:43.550 Flexible Data Placement Supported: Supported 00:11:43.550 00:11:43.550 Controller Memory Buffer Support 00:11:43.550 ================================ 00:11:43.550 Supported: No 00:11:43.550 00:11:43.550 Persistent Memory Region Support 00:11:43.550 ================================ 00:11:43.550 Supported: No 00:11:43.550 00:11:43.550 Admin Command Set Attributes 00:11:43.550 ============================ 00:11:43.550 Security Send/Receive: Not Supported 00:11:43.550 Format NVM: Supported 00:11:43.550 Firmware Activate/Download: Not Supported 00:11:43.550 Namespace Management: Supported 00:11:43.550 Device Self-Test: Not Supported 00:11:43.550 Directives: Supported 00:11:43.550 NVMe-MI: Not Supported 00:11:43.550 Virtualization Management: Not Supported 00:11:43.550 Doorbell Buffer Config: Supported 00:11:43.550 Get LBA Status Capability: Not Supported 00:11:43.550 Command & Feature Lockdown Capability: Not Supported 00:11:43.550 Abort Command Limit: 4 00:11:43.550 Async Event Request Limit: 4 00:11:43.550 Number of Firmware Slots: N/A 00:11:43.550 Firmware Slot 1 Read-Only: N/A 00:11:43.550 Firmware Activation Without Reset: N/A 00:11:43.550 Multiple Update Detection Support: N/A 00:11:43.550 Firmware Update Granularity: No Information Provided 00:11:43.550 Per-Namespace SMART Log: Yes 00:11:43.550 Asymmetric Namespace Access Log Page: Not Supported 00:11:43.550 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:43.550 Command Effects Log Page: Supported 00:11:43.550 Get Log Page Extended Data: Supported 00:11:43.550 Telemetry Log Pages: Not
Supported 00:11:43.550 Persistent Event Log Pages: Not Supported 00:11:43.550 Supported Log Pages Log Page: May Support 00:11:43.550 Commands Supported & Effects Log Page: Not Supported 00:11:43.550 Feature Identifiers & Effects Log Page: May Support 00:11:43.550 NVMe-MI Commands & Effects Log Page: May Support 00:11:43.550 Data Area 4 for Telemetry Log: Not Supported 00:11:43.550 Error Log Page Entries Supported: 1 00:11:43.550 Keep Alive: Not Supported 00:11:43.550 00:11:43.550 NVM Command Set Attributes 00:11:43.550 ========================== 00:11:43.550 Submission Queue Entry Size 00:11:43.550 Max: 64 00:11:43.550 Min: 64 00:11:43.550 Completion Queue Entry Size 00:11:43.550 Max: 16 00:11:43.550 Min: 16 00:11:43.550 Number of Namespaces: 256 00:11:43.550 Compare Command: Supported 00:11:43.550 Write Uncorrectable Command: Not Supported 00:11:43.550 Dataset Management Command: Supported 00:11:43.550 Write Zeroes Command: Supported 00:11:43.550 Set Features Save Field: Supported 00:11:43.550 Reservations: Not Supported 00:11:43.550 Timestamp: Supported 00:11:43.550 Copy: Supported 00:11:43.550 Volatile Write Cache: Present 00:11:43.550 Atomic Write Unit (Normal): 1 00:11:43.550 Atomic Write Unit (PFail): 1 00:11:43.550 Atomic Compare & Write Unit: 1 00:11:43.550 Fused Compare & Write: Not Supported 00:11:43.550 Scatter-Gather List 00:11:43.550 SGL Command Set: Supported 00:11:43.550 SGL Keyed: Not Supported 00:11:43.550 SGL Bit Bucket Descriptor: Not Supported 00:11:43.550 SGL Metadata Pointer: Not Supported 00:11:43.550 Oversized SGL: Not Supported 00:11:43.550 SGL Metadata Address: Not Supported 00:11:43.550 SGL Offset: Not Supported 00:11:43.550 Transport SGL Data Block: Not Supported 00:11:43.550 Replay Protected Memory Block: Not Supported 00:11:43.550 00:11:43.550 Firmware Slot Information 00:11:43.550 ========================= 00:11:43.550 Active slot: 1 00:11:43.550 Slot 1 Firmware Revision: 1.0 00:11:43.550 00:11:43.550 00:11:43.550 Commands Supported and Effects 00:11:43.550 ============================== 00:11:43.550 Admin Commands 00:11:43.550 -------------- 00:11:43.550 Delete I/O Submission Queue (00h): Supported 00:11:43.550 Create I/O Submission Queue (01h): Supported 00:11:43.550 Get Log Page (02h): Supported 00:11:43.550 Delete I/O Completion Queue (04h): Supported 00:11:43.550 Create I/O Completion Queue (05h): Supported 00:11:43.550 Identify (06h): Supported 00:11:43.550 Abort (08h): Supported 00:11:43.550 Set Features (09h): Supported 00:11:43.550 Get Features (0Ah): Supported 00:11:43.550 Asynchronous Event Request (0Ch): Supported 00:11:43.550 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:43.550 Directive Send (19h): Supported 00:11:43.550 Directive Receive (1Ah): Supported 00:11:43.550 Virtualization Management (1Ch): Supported 00:11:43.550 Doorbell Buffer Config (7Ch): Supported 00:11:43.550 Format NVM (80h): Supported LBA-Change 00:11:43.550 I/O Commands 00:11:43.550 ------------ 00:11:43.550 Flush (00h): Supported LBA-Change 00:11:43.550 Write (01h): Supported LBA-Change 00:11:43.550 Read (02h): Supported 00:11:43.550 Compare (05h): Supported 00:11:43.550 Write Zeroes (08h): Supported LBA-Change 00:11:43.550 Dataset Management (09h): Supported LBA-Change 00:11:43.550 Unknown (0Ch): Supported 00:11:43.550 Unknown (12h): Supported 00:11:43.550 Copy (19h): Supported LBA-Change 00:11:43.550 Unknown (1Dh): Supported LBA-Change 00:11:43.550 00:11:43.550 Error Log 00:11:43.550 ========= 00:11:43.550 00:11:43.550 Arbitration 00:11:43.550 ===========
00:11:43.550 Arbitration Burst: no limit 00:11:43.550 00:11:43.550 Power Management 00:11:43.550 ================ 00:11:43.550 Number of Power States: 1 00:11:43.550 Current Power State: Power State #0 00:11:43.550 Power State #0: 00:11:43.550 Max Power: 25.00 W 00:11:43.550 Non-Operational State: Operational 00:11:43.550 Entry Latency: 16 microseconds 00:11:43.550 Exit Latency: 4 microseconds 00:11:43.550 Relative Read Throughput: 0 00:11:43.550 Relative Read Latency: 0 00:11:43.550 Relative Write Throughput: 0 00:11:43.550 Relative Write Latency: 0 00:11:43.550 Idle Power: Not Reported 00:11:43.550 Active Power: Not Reported 00:11:43.550 Non-Operational Permissive Mode: Not Supported 00:11:43.550 00:11:43.550 Health Information 00:11:43.550 ================== 00:11:43.550 Critical Warnings: 00:11:43.550 Available Spare Space: OK 00:11:43.550 Temperature: OK 00:11:43.550 Device Reliability: OK 00:11:43.550 Read Only: No 00:11:43.550 Volatile Memory Backup: OK 00:11:43.550 Current Temperature: 323 Kelvin (50 Celsius) 00:11:43.550 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:43.550 Available Spare: 0% 00:11:43.550 Available Spare Threshold: 0% 00:11:43.550 Life Percentage Used: 0% 00:11:43.550 Data Units Read: 798 00:11:43.550 Data Units Written: 727 00:11:43.550 Host Read Commands: 38050 00:11:43.551 Host Write Commands: 37473 00:11:43.551 Controller Busy Time: 0 minutes 00:11:43.551 Power Cycles: 0 00:11:43.551 Power On Hours: 0 hours 00:11:43.551 Unsafe Shutdowns: 0 00:11:43.551 Unrecoverable Media Errors: 0 00:11:43.551 Lifetime Error Log Entries: 0 00:11:43.551 Warning Temperature Time: 0 minutes 00:11:43.551 Critical Temperature Time: 0 minutes 00:11:43.551 00:11:43.551 Number of Queues 00:11:43.551 ================ 00:11:43.551 Number of I/O Submission Queues: 64 00:11:43.551 Number of I/O Completion Queues: 64 00:11:43.551 00:11:43.551 ZNS Specific Controller Data 00:11:43.551 ============================ 00:11:43.551 Zone Append Size Limit: 0 00:11:43.551 00:11:43.551 00:11:43.551 Active Namespaces 00:11:43.551 ================= 00:11:43.551 Namespace ID:1 00:11:43.551 Error Recovery Timeout: Unlimited 00:11:43.551 Command Set Identifier: NVM (00h) 00:11:43.551 Deallocate: Supported 00:11:43.551 Deallocated/Unwritten Error: Supported 00:11:43.551 Deallocated Read Value: All 0x00 00:11:43.551 Deallocate in Write Zeroes: Not Supported 00:11:43.551 Deallocated Guard Field: 0xFFFF 00:11:43.551 Flush: Supported 00:11:43.551 Reservation: Not Supported 00:11:43.551 Namespace Sharing Capabilities: Multiple Controllers 00:11:43.551 Size (in LBAs): 262144 (1GiB) 00:11:43.551 Capacity (in LBAs): 262144 (1GiB) 00:11:43.551 Utilization (in LBAs): 262144 (1GiB) 00:11:43.551 Thin Provisioning: Not Supported 00:11:43.551 Per-NS Atomic Units: No 00:11:43.551 Maximum Single Source Range Length: 128 00:11:43.551 Maximum Copy Length: 128 00:11:43.551 Maximum Source Range Count: 128 00:11:43.551 NGUID/EUI64 Never Reused: No 00:11:43.551 Namespace Write Protected: No 00:11:43.551 Endurance group ID: 1 00:11:43.551 Number of LBA Formats: 8 00:11:43.551 Current LBA Format: LBA Format #04 00:11:43.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:43.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:43.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:43.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:43.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:43.551 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:11:43.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:43.551 00:11:43.551 Get Feature FDP: 00:11:43.551 ================ 00:11:43.551 Enabled: Yes 00:11:43.551 FDP configuration index: 0 00:11:43.551 00:11:43.551 FDP configurations log page 00:11:43.551 =========================== 00:11:43.551 Number of FDP configurations: 1 00:11:43.551 Version: 0 00:11:43.551 Size: 112 00:11:43.551 FDP Configuration Descriptor: 0 00:11:43.551 Descriptor Size: 96 00:11:43.551 Reclaim Group Identifier format: 2 00:11:43.551 FDP Volatile Write Cache: Not Present 00:11:43.551 FDP Configuration: Valid 00:11:43.551 Vendor Specific Size: 0 00:11:43.551 Number of Reclaim Groups: 2 00:11:43.551 Number of Reclaim Unit Handles: 8 00:11:43.551 Max Placement Identifiers: 128 00:11:43.551 Number of Namespaces Supported: 256 00:11:43.551 Reclaim Unit Nominal Size: 6000000 bytes 00:11:43.551 Estimated Reclaim Unit Time Limit: Not Reported 00:11:43.551 RUH Desc #000: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #001: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #002: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #003: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #004: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #005: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #006: RUH Type: Initially Isolated 00:11:43.551 RUH Desc #007: RUH Type: Initially Isolated 00:11:43.551 00:11:43.551 FDP reclaim unit handle usage log page 00:11:43.551 ====================================== 00:11:43.551 Number of Reclaim Unit Handles: 8 00:11:43.551 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:43.551 RUH Usage Desc #001: RUH Attributes: Unused 00:11:43.551 RUH Usage Desc #002: RUH Attributes: Unused 00:11:43.551 RUH Usage Desc #003: RUH Attributes: Unused 00:11:43.551 RUH Usage Desc #004: RUH Attributes: Unused 00:11:43.551 RUH Usage Desc #005: RUH Attributes: Unused 00:11:43.551 RUH Usage Desc #006: RUH Attributes: Unused 00:11:43.551 RUH Usage Desc #007: RUH Attributes: Unused 00:11:43.551 00:11:43.551 FDP statistics log page 00:11:43.551 ======================= 00:11:43.551 Host bytes with metadata written: 439525376 00:11:43.551 Media bytes with metadata written: 439578624 00:11:43.551 Media bytes erased: 0 00:11:43.551 00:11:43.551 FDP events log page 00:11:43.551 =================== 00:11:43.551 Number of FDP events: 0 00:11:43.551 00:11:43.551 NVM Specific Namespace Data 00:11:43.551 =========================== 00:11:43.551 Logical Block Storage Tag Mask: 0 00:11:43.551 Protection Information Capabilities: 00:11:43.551 16b Guard Protection Information Storage Tag Support: No 00:11:43.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:43.551 Storage Tag Check Read Support: No 00:11:43.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:43.551 00:11:43.551 real 0m1.240s 00:11:43.551 user 0m0.430s 00:11:43.551 sys 0m0.587s 00:11:43.551 13:31:42 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.551 ************************************ 00:11:43.551 END TEST nvme_identify 00:11:43.551 ************************************ 00:11:43.551 13:31:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:43.551 13:31:42 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:43.551 13:31:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.551 13:31:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.551 13:31:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:43.551 ************************************ 00:11:43.551 START TEST nvme_perf 00:11:43.551 ************************************ 00:11:43.551 13:31:42 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:43.551 13:31:42 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:44.940 Initializing NVMe Controllers 00:11:44.940 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:44.940 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:44.940 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:44.940 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:44.940 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:44.940 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:44.940 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:44.940 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:44.940 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:44.940 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:44.940 Initialization complete. Launching workers. 
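The summary tables that follow report, for each attached namespace, IOPS, throughput in MiB/s, and average/min/max latency in microseconds, followed by per-device latency percentiles and cumulative histograms (microsecond buckets with cumulative I/O counts). As a quick way to pull the per-device average latency back out of a saved copy of this console output, a minimal awk sketch; the perf.log filename is hypothetical, and it assumes each line keeps its leading timestamp field as above, so the PCIe traddr is field 3 and the average latency is field 11:

  awk '$2 == "PCIE" && $8 == "0:" && NF >= 13 { print $3, "avg_us=" $11 }' perf.log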
00:11:44.940 ======================================================== 00:11:44.940 Latency(us) 00:11:44.940 Device Information : IOPS MiB/s Average min max 00:11:44.940 PCIE (0000:00:11.0) NSID 1 from core 0: 8144.59 95.44 15759.51 11084.36 39429.49 00:11:44.940 PCIE (0000:00:13.0) NSID 1 from core 0: 8144.59 95.44 15743.27 11110.88 38140.88 00:11:44.940 PCIE (0000:00:10.0) NSID 1 from core 0: 8144.59 95.44 15723.94 11148.05 36916.87 00:11:44.940 PCIE (0000:00:12.0) NSID 1 from core 0: 8144.59 95.44 15701.79 11489.98 34983.84 00:11:44.940 PCIE (0000:00:12.0) NSID 2 from core 0: 8144.59 95.44 15677.85 11581.62 34256.80 00:11:44.940 PCIE (0000:00:12.0) NSID 3 from core 0: 8208.22 96.19 15532.52 10637.35 27419.92 00:11:44.940 ======================================================== 00:11:44.940 Total : 48931.17 573.41 15689.61 10637.35 39429.49 00:11:44.940 00:11:44.940 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:44.940 ================================================================================= 00:11:44.940 1.00000% : 11594.831us 00:11:44.940 10.00000% : 12905.551us 00:11:44.940 25.00000% : 14014.622us 00:11:44.940 50.00000% : 15325.342us 00:11:44.940 75.00000% : 17039.360us 00:11:44.940 90.00000% : 18350.080us 00:11:44.940 95.00000% : 19559.975us 00:11:44.940 98.00000% : 21072.345us 00:11:44.940 99.00000% : 32062.228us 00:11:44.940 99.50000% : 38515.003us 00:11:44.940 99.90000% : 39321.600us 00:11:44.940 99.99000% : 39523.249us 00:11:44.940 99.99900% : 39523.249us 00:11:44.940 99.99990% : 39523.249us 00:11:44.940 99.99999% : 39523.249us 00:11:44.940 00:11:44.940 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:44.940 ================================================================================= 00:11:44.940 1.00000% : 11746.068us 00:11:44.940 10.00000% : 12905.551us 00:11:44.940 25.00000% : 14115.446us 00:11:44.940 50.00000% : 15325.342us 00:11:44.940 75.00000% : 17039.360us 00:11:44.940 90.00000% : 18350.080us 00:11:44.940 95.00000% : 19459.151us 00:11:44.940 98.00000% : 20769.871us 00:11:44.940 99.00000% : 30650.683us 00:11:44.940 99.50000% : 37305.108us 00:11:44.940 99.90000% : 38111.705us 00:11:44.940 99.99000% : 38313.354us 00:11:44.940 99.99900% : 38313.354us 00:11:44.940 99.99990% : 38313.354us 00:11:44.940 99.99999% : 38313.354us 00:11:44.940 00:11:44.940 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:44.940 ================================================================================= 00:11:44.940 1.00000% : 11746.068us 00:11:44.940 10.00000% : 13006.375us 00:11:44.940 25.00000% : 14115.446us 00:11:44.940 50.00000% : 15325.342us 00:11:44.940 75.00000% : 16938.535us 00:11:44.940 90.00000% : 18350.080us 00:11:44.940 95.00000% : 19660.800us 00:11:44.940 98.00000% : 20971.520us 00:11:44.940 99.00000% : 29440.788us 00:11:44.940 99.50000% : 35893.563us 00:11:44.940 99.90000% : 36901.809us 00:11:44.940 99.99000% : 37103.458us 00:11:44.940 99.99900% : 37103.458us 00:11:44.940 99.99990% : 37103.458us 00:11:44.940 99.99999% : 37103.458us 00:11:44.940 00:11:44.940 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:44.940 ================================================================================= 00:11:44.940 1.00000% : 11947.717us 00:11:44.940 10.00000% : 13006.375us 00:11:44.940 25.00000% : 14115.446us 00:11:44.940 50.00000% : 15325.342us 00:11:44.940 75.00000% : 16837.711us 00:11:44.940 90.00000% : 18450.905us 00:11:44.940 95.00000% : 19862.449us 00:11:44.940 98.00000% : 21173.169us 
00:11:44.940 99.00000% : 28230.892us 00:11:44.940 99.50000% : 34078.720us 00:11:44.940 99.90000% : 34885.317us 00:11:44.940 99.99000% : 35086.966us 00:11:44.940 99.99900% : 35086.966us 00:11:44.940 99.99990% : 35086.966us 00:11:44.940 99.99999% : 35086.966us 00:11:44.940 00:11:44.940 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:44.940 ================================================================================= 00:11:44.940 1.00000% : 12048.542us 00:11:44.940 10.00000% : 13006.375us 00:11:44.940 25.00000% : 13913.797us 00:11:44.940 50.00000% : 15325.342us 00:11:44.940 75.00000% : 16938.535us 00:11:44.940 90.00000% : 18450.905us 00:11:44.940 95.00000% : 19963.274us 00:11:44.940 98.00000% : 21273.994us 00:11:44.940 99.00000% : 27424.295us 00:11:44.940 99.50000% : 33473.772us 00:11:44.940 99.90000% : 34078.720us 00:11:44.940 99.99000% : 34280.369us 00:11:44.940 99.99900% : 34280.369us 00:11:44.940 99.99990% : 34280.369us 00:11:44.940 99.99999% : 34280.369us 00:11:44.940 00:11:44.940 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:44.940 ================================================================================= 00:11:44.940 1.00000% : 11292.357us 00:11:44.940 10.00000% : 12804.726us 00:11:44.940 25.00000% : 13913.797us 00:11:44.940 50.00000% : 15325.342us 00:11:44.940 75.00000% : 16938.535us 00:11:44.940 90.00000% : 18350.080us 00:11:44.940 95.00000% : 19559.975us 00:11:44.940 98.00000% : 20769.871us 00:11:44.940 99.00000% : 21374.818us 00:11:44.940 99.50000% : 26617.698us 00:11:44.940 99.90000% : 27424.295us 00:11:44.940 99.99000% : 27424.295us 00:11:44.940 99.99900% : 27424.295us 00:11:44.940 99.99990% : 27424.295us 00:11:44.940 99.99999% : 27424.295us 00:11:44.940 00:11:44.940 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:44.940 ============================================================================== 00:11:44.940 Range in us Cumulative IO count 00:11:44.940 11040.295 - 11090.708: 0.0122% ( 1) 00:11:44.940 11090.708 - 11141.120: 0.0366% ( 2) 00:11:44.940 11141.120 - 11191.532: 0.0854% ( 4) 00:11:44.940 11191.532 - 11241.945: 0.1587% ( 6) 00:11:44.940 11241.945 - 11292.357: 0.2441% ( 7) 00:11:44.940 11292.357 - 11342.769: 0.3296% ( 7) 00:11:44.940 11342.769 - 11393.182: 0.4517% ( 10) 00:11:44.940 11393.182 - 11443.594: 0.5615% ( 9) 00:11:44.940 11443.594 - 11494.006: 0.7568% ( 16) 00:11:44.940 11494.006 - 11544.418: 0.9155% ( 13) 00:11:44.940 11544.418 - 11594.831: 1.0742% ( 13) 00:11:44.940 11594.831 - 11645.243: 1.2329% ( 13) 00:11:44.940 11645.243 - 11695.655: 1.4282% ( 16) 00:11:44.940 11695.655 - 11746.068: 1.5747% ( 12) 00:11:44.940 11746.068 - 11796.480: 1.7822% ( 17) 00:11:44.940 11796.480 - 11846.892: 1.9775% ( 16) 00:11:44.940 11846.892 - 11897.305: 2.1729% ( 16) 00:11:44.940 11897.305 - 11947.717: 2.5024% ( 27) 00:11:44.940 11947.717 - 11998.129: 2.7222% ( 18) 00:11:44.940 11998.129 - 12048.542: 2.9419% ( 18) 00:11:44.940 12048.542 - 12098.954: 3.1494% ( 17) 00:11:44.940 12098.954 - 12149.366: 3.3569% ( 17) 00:11:44.940 12149.366 - 12199.778: 3.5645% ( 17) 00:11:44.940 12199.778 - 12250.191: 3.8696% ( 25) 00:11:44.940 12250.191 - 12300.603: 4.3213% ( 37) 00:11:44.940 12300.603 - 12351.015: 4.8218% ( 41) 00:11:44.940 12351.015 - 12401.428: 5.2734% ( 37) 00:11:44.940 12401.428 - 12451.840: 5.7617% ( 40) 00:11:44.940 12451.840 - 12502.252: 6.2866% ( 43) 00:11:44.940 12502.252 - 12552.665: 6.7871% ( 41) 00:11:44.941 12552.665 - 12603.077: 7.2754% ( 40) 00:11:44.941 12603.077 - 12653.489: 7.7881% ( 
42) 00:11:44.941 12653.489 - 12703.902: 8.3252% ( 44) 00:11:44.941 12703.902 - 12754.314: 8.8135% ( 40) 00:11:44.941 12754.314 - 12804.726: 9.3872% ( 47) 00:11:44.941 12804.726 - 12855.138: 9.8877% ( 41) 00:11:44.941 12855.138 - 12905.551: 10.4736% ( 48) 00:11:44.941 12905.551 - 13006.375: 11.8042% ( 109) 00:11:44.941 13006.375 - 13107.200: 13.1226% ( 108) 00:11:44.941 13107.200 - 13208.025: 14.4043% ( 105) 00:11:44.941 13208.025 - 13308.849: 15.6128% ( 99) 00:11:44.941 13308.849 - 13409.674: 16.7847% ( 96) 00:11:44.941 13409.674 - 13510.498: 18.1274% ( 110) 00:11:44.941 13510.498 - 13611.323: 19.4702% ( 110) 00:11:44.941 13611.323 - 13712.148: 20.7520% ( 105) 00:11:44.941 13712.148 - 13812.972: 22.2412% ( 122) 00:11:44.941 13812.972 - 13913.797: 23.7549% ( 124) 00:11:44.941 13913.797 - 14014.622: 25.2441% ( 122) 00:11:44.941 14014.622 - 14115.446: 26.8188% ( 129) 00:11:44.941 14115.446 - 14216.271: 28.6133% ( 147) 00:11:44.941 14216.271 - 14317.095: 30.6763% ( 169) 00:11:44.941 14317.095 - 14417.920: 32.7148% ( 167) 00:11:44.941 14417.920 - 14518.745: 34.8267% ( 173) 00:11:44.941 14518.745 - 14619.569: 36.8652% ( 167) 00:11:44.941 14619.569 - 14720.394: 39.0381% ( 178) 00:11:44.941 14720.394 - 14821.218: 41.1987% ( 177) 00:11:44.941 14821.218 - 14922.043: 43.3228% ( 174) 00:11:44.941 14922.043 - 15022.868: 45.6787% ( 193) 00:11:44.941 15022.868 - 15123.692: 47.7295% ( 168) 00:11:44.941 15123.692 - 15224.517: 49.4629% ( 142) 00:11:44.941 15224.517 - 15325.342: 51.3184% ( 152) 00:11:44.941 15325.342 - 15426.166: 52.9297% ( 132) 00:11:44.941 15426.166 - 15526.991: 54.5044% ( 129) 00:11:44.941 15526.991 - 15627.815: 56.1768% ( 137) 00:11:44.941 15627.815 - 15728.640: 57.7515% ( 129) 00:11:44.941 15728.640 - 15829.465: 59.1797% ( 117) 00:11:44.941 15829.465 - 15930.289: 60.5347% ( 111) 00:11:44.941 15930.289 - 16031.114: 61.9751% ( 118) 00:11:44.941 16031.114 - 16131.938: 63.2446% ( 104) 00:11:44.941 16131.938 - 16232.763: 64.4043% ( 95) 00:11:44.941 16232.763 - 16333.588: 65.7715% ( 112) 00:11:44.941 16333.588 - 16434.412: 67.2119% ( 118) 00:11:44.941 16434.412 - 16535.237: 68.7256% ( 124) 00:11:44.941 16535.237 - 16636.062: 70.2148% ( 122) 00:11:44.941 16636.062 - 16736.886: 71.4966% ( 105) 00:11:44.941 16736.886 - 16837.711: 72.7539% ( 103) 00:11:44.941 16837.711 - 16938.535: 73.9380% ( 97) 00:11:44.941 16938.535 - 17039.360: 75.1831% ( 102) 00:11:44.941 17039.360 - 17140.185: 76.5991% ( 116) 00:11:44.941 17140.185 - 17241.009: 77.9663% ( 112) 00:11:44.941 17241.009 - 17341.834: 79.4800% ( 124) 00:11:44.941 17341.834 - 17442.658: 80.8838% ( 115) 00:11:44.941 17442.658 - 17543.483: 82.0679% ( 97) 00:11:44.941 17543.483 - 17644.308: 83.2886% ( 100) 00:11:44.941 17644.308 - 17745.132: 84.4360% ( 94) 00:11:44.941 17745.132 - 17845.957: 85.5469% ( 91) 00:11:44.941 17845.957 - 17946.782: 86.8164% ( 104) 00:11:44.941 17946.782 - 18047.606: 87.7930% ( 80) 00:11:44.941 18047.606 - 18148.431: 88.6963% ( 74) 00:11:44.941 18148.431 - 18249.255: 89.4165% ( 59) 00:11:44.941 18249.255 - 18350.080: 90.1611% ( 61) 00:11:44.941 18350.080 - 18450.905: 90.7349% ( 47) 00:11:44.941 18450.905 - 18551.729: 91.2842% ( 45) 00:11:44.941 18551.729 - 18652.554: 91.8213% ( 44) 00:11:44.941 18652.554 - 18753.378: 92.3218% ( 41) 00:11:44.941 18753.378 - 18854.203: 92.7490% ( 35) 00:11:44.941 18854.203 - 18955.028: 93.0908% ( 28) 00:11:44.941 18955.028 - 19055.852: 93.3716% ( 23) 00:11:44.941 19055.852 - 19156.677: 93.7134% ( 28) 00:11:44.941 19156.677 - 19257.502: 94.0796% ( 30) 00:11:44.941 19257.502 - 19358.326: 
94.3604% ( 23) 00:11:44.941 19358.326 - 19459.151: 94.6655% ( 25) 00:11:44.941 19459.151 - 19559.975: 95.0195% ( 29) 00:11:44.941 19559.975 - 19660.800: 95.2637% ( 20) 00:11:44.941 19660.800 - 19761.625: 95.5566% ( 24) 00:11:44.941 19761.625 - 19862.449: 95.8496% ( 24) 00:11:44.941 19862.449 - 19963.274: 96.0938% ( 20) 00:11:44.941 19963.274 - 20064.098: 96.3257% ( 19) 00:11:44.941 20064.098 - 20164.923: 96.5454% ( 18) 00:11:44.941 20164.923 - 20265.748: 96.7773% ( 19) 00:11:44.941 20265.748 - 20366.572: 96.9360% ( 13) 00:11:44.941 20366.572 - 20467.397: 97.1069% ( 14) 00:11:44.941 20467.397 - 20568.222: 97.2900% ( 15) 00:11:44.941 20568.222 - 20669.046: 97.5220% ( 19) 00:11:44.941 20669.046 - 20769.871: 97.7295% ( 17) 00:11:44.941 20769.871 - 20870.695: 97.8638% ( 11) 00:11:44.941 20870.695 - 20971.520: 97.9614% ( 8) 00:11:44.941 20971.520 - 21072.345: 98.0469% ( 7) 00:11:44.941 21072.345 - 21173.169: 98.1201% ( 6) 00:11:44.941 21173.169 - 21273.994: 98.1934% ( 6) 00:11:44.941 21273.994 - 21374.818: 98.2666% ( 6) 00:11:44.941 21374.818 - 21475.643: 98.3521% ( 7) 00:11:44.941 21475.643 - 21576.468: 98.4131% ( 5) 00:11:44.941 21576.468 - 21677.292: 98.4375% ( 2) 00:11:44.941 30650.683 - 30852.332: 98.4619% ( 2) 00:11:44.941 30852.332 - 31053.982: 98.5596% ( 8) 00:11:44.941 31053.982 - 31255.631: 98.6694% ( 9) 00:11:44.941 31255.631 - 31457.280: 98.7671% ( 8) 00:11:44.941 31457.280 - 31658.929: 98.8770% ( 9) 00:11:44.941 31658.929 - 31860.578: 98.9746% ( 8) 00:11:44.941 31860.578 - 32062.228: 99.0845% ( 9) 00:11:44.941 32062.228 - 32263.877: 99.1943% ( 9) 00:11:44.941 32263.877 - 32465.526: 99.2188% ( 2) 00:11:44.941 37910.055 - 38111.705: 99.3042% ( 7) 00:11:44.941 38111.705 - 38313.354: 99.4141% ( 9) 00:11:44.941 38313.354 - 38515.003: 99.5117% ( 8) 00:11:44.941 38515.003 - 38716.652: 99.6094% ( 8) 00:11:44.941 38716.652 - 38918.302: 99.7192% ( 9) 00:11:44.941 38918.302 - 39119.951: 99.8291% ( 9) 00:11:44.941 39119.951 - 39321.600: 99.9390% ( 9) 00:11:44.941 39321.600 - 39523.249: 100.0000% ( 5) 00:11:44.941 00:11:44.941 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:44.941 ============================================================================== 00:11:44.941 Range in us Cumulative IO count 00:11:44.941 11090.708 - 11141.120: 0.0122% ( 1) 00:11:44.941 11141.120 - 11191.532: 0.0610% ( 4) 00:11:44.941 11191.532 - 11241.945: 0.0977% ( 3) 00:11:44.941 11241.945 - 11292.357: 0.1465% ( 4) 00:11:44.941 11292.357 - 11342.769: 0.1953% ( 4) 00:11:44.941 11342.769 - 11393.182: 0.2441% ( 4) 00:11:44.941 11393.182 - 11443.594: 0.3052% ( 5) 00:11:44.941 11443.594 - 11494.006: 0.4150% ( 9) 00:11:44.941 11494.006 - 11544.418: 0.5371% ( 10) 00:11:44.941 11544.418 - 11594.831: 0.6592% ( 10) 00:11:44.941 11594.831 - 11645.243: 0.7935% ( 11) 00:11:44.941 11645.243 - 11695.655: 0.9399% ( 12) 00:11:44.941 11695.655 - 11746.068: 1.1719% ( 19) 00:11:44.941 11746.068 - 11796.480: 1.3672% ( 16) 00:11:44.941 11796.480 - 11846.892: 1.5747% ( 17) 00:11:44.941 11846.892 - 11897.305: 1.7334% ( 13) 00:11:44.941 11897.305 - 11947.717: 1.9287% ( 16) 00:11:44.941 11947.717 - 11998.129: 2.1240% ( 16) 00:11:44.941 11998.129 - 12048.542: 2.3193% ( 16) 00:11:44.941 12048.542 - 12098.954: 2.5757% ( 21) 00:11:44.941 12098.954 - 12149.366: 2.8442% ( 22) 00:11:44.941 12149.366 - 12199.778: 3.2104% ( 30) 00:11:44.941 12199.778 - 12250.191: 3.5522% ( 28) 00:11:44.941 12250.191 - 12300.603: 4.0649% ( 42) 00:11:44.941 12300.603 - 12351.015: 4.5410% ( 39) 00:11:44.941 12351.015 - 12401.428: 4.9927% ( 37) 
00:11:44.941 12401.428 - 12451.840: 5.3955% ( 33) 00:11:44.941 12451.840 - 12502.252: 5.8594% ( 38) 00:11:44.941 12502.252 - 12552.665: 6.3110% ( 37) 00:11:44.941 12552.665 - 12603.077: 6.7871% ( 39) 00:11:44.941 12603.077 - 12653.489: 7.2632% ( 39) 00:11:44.941 12653.489 - 12703.902: 7.7759% ( 42) 00:11:44.941 12703.902 - 12754.314: 8.3008% ( 43) 00:11:44.941 12754.314 - 12804.726: 8.8989% ( 49) 00:11:44.941 12804.726 - 12855.138: 9.4604% ( 46) 00:11:44.941 12855.138 - 12905.551: 10.0098% ( 45) 00:11:44.941 12905.551 - 13006.375: 11.2671% ( 103) 00:11:44.941 13006.375 - 13107.200: 12.4390% ( 96) 00:11:44.941 13107.200 - 13208.025: 13.6353% ( 98) 00:11:44.941 13208.025 - 13308.849: 14.7339% ( 90) 00:11:44.941 13308.849 - 13409.674: 15.8203% ( 89) 00:11:44.941 13409.674 - 13510.498: 17.0288% ( 99) 00:11:44.941 13510.498 - 13611.323: 18.2983% ( 104) 00:11:44.941 13611.323 - 13712.148: 19.6533% ( 111) 00:11:44.941 13712.148 - 13812.972: 21.3135% ( 136) 00:11:44.941 13812.972 - 13913.797: 22.9980% ( 138) 00:11:44.941 13913.797 - 14014.622: 24.8901% ( 155) 00:11:44.941 14014.622 - 14115.446: 26.6602% ( 145) 00:11:44.941 14115.446 - 14216.271: 28.5645% ( 156) 00:11:44.941 14216.271 - 14317.095: 30.5298% ( 161) 00:11:44.941 14317.095 - 14417.920: 32.3853% ( 152) 00:11:44.941 14417.920 - 14518.745: 34.2285% ( 151) 00:11:44.941 14518.745 - 14619.569: 36.3281% ( 172) 00:11:44.941 14619.569 - 14720.394: 38.5498% ( 182) 00:11:44.941 14720.394 - 14821.218: 40.7959% ( 184) 00:11:44.941 14821.218 - 14922.043: 43.1641% ( 194) 00:11:44.941 14922.043 - 15022.868: 45.3003% ( 175) 00:11:44.941 15022.868 - 15123.692: 47.3145% ( 165) 00:11:44.941 15123.692 - 15224.517: 49.1699% ( 152) 00:11:44.941 15224.517 - 15325.342: 50.8667% ( 139) 00:11:44.941 15325.342 - 15426.166: 52.7588% ( 155) 00:11:44.941 15426.166 - 15526.991: 54.6509% ( 155) 00:11:44.941 15526.991 - 15627.815: 56.7383% ( 171) 00:11:44.941 15627.815 - 15728.640: 58.4351% ( 139) 00:11:44.941 15728.640 - 15829.465: 59.8511% ( 116) 00:11:44.941 15829.465 - 15930.289: 60.9985% ( 94) 00:11:44.941 15930.289 - 16031.114: 62.1216% ( 92) 00:11:44.941 16031.114 - 16131.938: 63.3301% ( 99) 00:11:44.941 16131.938 - 16232.763: 64.5996% ( 104) 00:11:44.941 16232.763 - 16333.588: 65.6738% ( 88) 00:11:44.941 16333.588 - 16434.412: 66.8701% ( 98) 00:11:44.941 16434.412 - 16535.237: 68.0786% ( 99) 00:11:44.942 16535.237 - 16636.062: 69.3726% ( 106) 00:11:44.942 16636.062 - 16736.886: 70.7886% ( 116) 00:11:44.942 16736.886 - 16837.711: 72.2412% ( 119) 00:11:44.942 16837.711 - 16938.535: 73.6694% ( 117) 00:11:44.942 16938.535 - 17039.360: 75.2563% ( 130) 00:11:44.942 17039.360 - 17140.185: 76.9531% ( 139) 00:11:44.942 17140.185 - 17241.009: 78.6255% ( 137) 00:11:44.942 17241.009 - 17341.834: 80.1147% ( 122) 00:11:44.942 17341.834 - 17442.658: 81.4819% ( 112) 00:11:44.942 17442.658 - 17543.483: 82.8003% ( 108) 00:11:44.942 17543.483 - 17644.308: 83.9233% ( 92) 00:11:44.942 17644.308 - 17745.132: 85.0952% ( 96) 00:11:44.942 17745.132 - 17845.957: 86.2793% ( 97) 00:11:44.942 17845.957 - 17946.782: 87.3047% ( 84) 00:11:44.942 17946.782 - 18047.606: 88.2446% ( 77) 00:11:44.942 18047.606 - 18148.431: 88.9526% ( 58) 00:11:44.942 18148.431 - 18249.255: 89.6240% ( 55) 00:11:44.942 18249.255 - 18350.080: 90.2100% ( 48) 00:11:44.942 18350.080 - 18450.905: 90.7837% ( 47) 00:11:44.942 18450.905 - 18551.729: 91.3452% ( 46) 00:11:44.942 18551.729 - 18652.554: 91.8701% ( 43) 00:11:44.942 18652.554 - 18753.378: 92.4438% ( 47) 00:11:44.942 18753.378 - 18854.203: 92.9077% ( 38) 
00:11:44.942 18854.203 - 18955.028: 93.1519% ( 20) 00:11:44.942 18955.028 - 19055.852: 93.4082% ( 21) 00:11:44.942 19055.852 - 19156.677: 93.7256% ( 26) 00:11:44.942 19156.677 - 19257.502: 94.0918% ( 30) 00:11:44.942 19257.502 - 19358.326: 94.5557% ( 38) 00:11:44.942 19358.326 - 19459.151: 95.0439% ( 40) 00:11:44.942 19459.151 - 19559.975: 95.4712% ( 35) 00:11:44.942 19559.975 - 19660.800: 95.8130% ( 28) 00:11:44.942 19660.800 - 19761.625: 96.1426% ( 27) 00:11:44.942 19761.625 - 19862.449: 96.4966% ( 29) 00:11:44.942 19862.449 - 19963.274: 96.7651% ( 22) 00:11:44.942 19963.274 - 20064.098: 96.9849% ( 18) 00:11:44.942 20064.098 - 20164.923: 97.1924% ( 17) 00:11:44.942 20164.923 - 20265.748: 97.3633% ( 14) 00:11:44.942 20265.748 - 20366.572: 97.5464% ( 15) 00:11:44.942 20366.572 - 20467.397: 97.7051% ( 13) 00:11:44.942 20467.397 - 20568.222: 97.8394% ( 11) 00:11:44.942 20568.222 - 20669.046: 97.9736% ( 11) 00:11:44.942 20669.046 - 20769.871: 98.0591% ( 7) 00:11:44.942 20769.871 - 20870.695: 98.1323% ( 6) 00:11:44.942 20870.695 - 20971.520: 98.1934% ( 5) 00:11:44.942 20971.520 - 21072.345: 98.2666% ( 6) 00:11:44.942 21072.345 - 21173.169: 98.3398% ( 6) 00:11:44.942 21173.169 - 21273.994: 98.4131% ( 6) 00:11:44.942 21273.994 - 21374.818: 98.4375% ( 2) 00:11:44.942 29440.788 - 29642.437: 98.5107% ( 6) 00:11:44.942 29642.437 - 29844.086: 98.6206% ( 9) 00:11:44.942 29844.086 - 30045.735: 98.7305% ( 9) 00:11:44.942 30045.735 - 30247.385: 98.8281% ( 8) 00:11:44.942 30247.385 - 30449.034: 98.9380% ( 9) 00:11:44.942 30449.034 - 30650.683: 99.0356% ( 8) 00:11:44.942 30650.683 - 30852.332: 99.1333% ( 8) 00:11:44.942 30852.332 - 31053.982: 99.2188% ( 7) 00:11:44.942 36498.511 - 36700.160: 99.2798% ( 5) 00:11:44.942 36700.160 - 36901.809: 99.3774% ( 8) 00:11:44.942 36901.809 - 37103.458: 99.4629% ( 7) 00:11:44.942 37103.458 - 37305.108: 99.5605% ( 8) 00:11:44.942 37305.108 - 37506.757: 99.6582% ( 8) 00:11:44.942 37506.757 - 37708.406: 99.7681% ( 9) 00:11:44.942 37708.406 - 37910.055: 99.8657% ( 8) 00:11:44.942 37910.055 - 38111.705: 99.9756% ( 9) 00:11:44.942 38111.705 - 38313.354: 100.0000% ( 2) 00:11:44.942 00:11:44.942 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:44.942 ============================================================================== 00:11:44.942 Range in us Cumulative IO count 00:11:44.942 11141.120 - 11191.532: 0.0244% ( 2) 00:11:44.942 11191.532 - 11241.945: 0.0854% ( 5) 00:11:44.942 11241.945 - 11292.357: 0.1221% ( 3) 00:11:44.942 11292.357 - 11342.769: 0.2319% ( 9) 00:11:44.942 11342.769 - 11393.182: 0.2563% ( 2) 00:11:44.942 11393.182 - 11443.594: 0.3662% ( 9) 00:11:44.942 11443.594 - 11494.006: 0.4395% ( 6) 00:11:44.942 11494.006 - 11544.418: 0.5005% ( 5) 00:11:44.942 11544.418 - 11594.831: 0.6714% ( 14) 00:11:44.942 11594.831 - 11645.243: 0.8057% ( 11) 00:11:44.942 11645.243 - 11695.655: 0.9155% ( 9) 00:11:44.942 11695.655 - 11746.068: 1.0010% ( 7) 00:11:44.942 11746.068 - 11796.480: 1.3794% ( 31) 00:11:44.942 11796.480 - 11846.892: 1.4648% ( 7) 00:11:44.942 11846.892 - 11897.305: 1.5747% ( 9) 00:11:44.942 11897.305 - 11947.717: 1.7212% ( 12) 00:11:44.942 11947.717 - 11998.129: 1.9043% ( 15) 00:11:44.942 11998.129 - 12048.542: 2.1240% ( 18) 00:11:44.942 12048.542 - 12098.954: 2.3315% ( 17) 00:11:44.942 12098.954 - 12149.366: 2.7222% ( 32) 00:11:44.942 12149.366 - 12199.778: 3.1128% ( 32) 00:11:44.942 12199.778 - 12250.191: 3.4180% ( 25) 00:11:44.942 12250.191 - 12300.603: 3.7720% ( 29) 00:11:44.942 12300.603 - 12351.015: 4.1748% ( 33) 00:11:44.942 12351.015 
- 12401.428: 4.4922% ( 26) 00:11:44.942 12401.428 - 12451.840: 4.9683% ( 39) 00:11:44.942 12451.840 - 12502.252: 5.4077% ( 36) 00:11:44.942 12502.252 - 12552.665: 5.8838% ( 39) 00:11:44.942 12552.665 - 12603.077: 6.4087% ( 43) 00:11:44.942 12603.077 - 12653.489: 6.8359% ( 35) 00:11:44.942 12653.489 - 12703.902: 7.3242% ( 40) 00:11:44.942 12703.902 - 12754.314: 7.7881% ( 38) 00:11:44.942 12754.314 - 12804.726: 8.2764% ( 40) 00:11:44.942 12804.726 - 12855.138: 8.6914% ( 34) 00:11:44.942 12855.138 - 12905.551: 9.1553% ( 38) 00:11:44.942 12905.551 - 13006.375: 10.5957% ( 118) 00:11:44.942 13006.375 - 13107.200: 11.9141% ( 108) 00:11:44.942 13107.200 - 13208.025: 12.9883% ( 88) 00:11:44.942 13208.025 - 13308.849: 14.0137% ( 84) 00:11:44.942 13308.849 - 13409.674: 15.1855% ( 96) 00:11:44.942 13409.674 - 13510.498: 16.4062% ( 100) 00:11:44.942 13510.498 - 13611.323: 17.7368% ( 109) 00:11:44.942 13611.323 - 13712.148: 19.0796% ( 110) 00:11:44.942 13712.148 - 13812.972: 20.8740% ( 147) 00:11:44.942 13812.972 - 13913.797: 22.4854% ( 132) 00:11:44.942 13913.797 - 14014.622: 23.9868% ( 123) 00:11:44.942 14014.622 - 14115.446: 26.0864% ( 172) 00:11:44.942 14115.446 - 14216.271: 28.0151% ( 158) 00:11:44.942 14216.271 - 14317.095: 29.9316% ( 157) 00:11:44.942 14317.095 - 14417.920: 31.9214% ( 163) 00:11:44.942 14417.920 - 14518.745: 34.1431% ( 182) 00:11:44.942 14518.745 - 14619.569: 36.3892% ( 184) 00:11:44.942 14619.569 - 14720.394: 38.4399% ( 168) 00:11:44.942 14720.394 - 14821.218: 40.4419% ( 164) 00:11:44.942 14821.218 - 14922.043: 42.5293% ( 171) 00:11:44.942 14922.043 - 15022.868: 44.7388% ( 181) 00:11:44.942 15022.868 - 15123.692: 46.9604% ( 182) 00:11:44.942 15123.692 - 15224.517: 49.0723% ( 173) 00:11:44.942 15224.517 - 15325.342: 50.8057% ( 142) 00:11:44.942 15325.342 - 15426.166: 52.8076% ( 164) 00:11:44.942 15426.166 - 15526.991: 54.6875% ( 154) 00:11:44.942 15526.991 - 15627.815: 56.3110% ( 133) 00:11:44.942 15627.815 - 15728.640: 58.0322% ( 141) 00:11:44.942 15728.640 - 15829.465: 59.4482% ( 116) 00:11:44.942 15829.465 - 15930.289: 60.8887% ( 118) 00:11:44.942 15930.289 - 16031.114: 62.0972% ( 99) 00:11:44.942 16031.114 - 16131.938: 63.5864% ( 122) 00:11:44.942 16131.938 - 16232.763: 64.9902% ( 115) 00:11:44.942 16232.763 - 16333.588: 66.2720% ( 105) 00:11:44.942 16333.588 - 16434.412: 67.7368% ( 120) 00:11:44.942 16434.412 - 16535.237: 69.1040% ( 112) 00:11:44.942 16535.237 - 16636.062: 70.5444% ( 118) 00:11:44.942 16636.062 - 16736.886: 72.3633% ( 149) 00:11:44.942 16736.886 - 16837.711: 73.7793% ( 116) 00:11:44.942 16837.711 - 16938.535: 75.4639% ( 138) 00:11:44.942 16938.535 - 17039.360: 77.0508% ( 130) 00:11:44.942 17039.360 - 17140.185: 78.5522% ( 123) 00:11:44.942 17140.185 - 17241.009: 80.1392% ( 130) 00:11:44.942 17241.009 - 17341.834: 81.3965% ( 103) 00:11:44.942 17341.834 - 17442.658: 82.8857% ( 122) 00:11:44.942 17442.658 - 17543.483: 84.1797% ( 106) 00:11:44.942 17543.483 - 17644.308: 85.2173% ( 85) 00:11:44.942 17644.308 - 17745.132: 86.3159% ( 90) 00:11:44.942 17745.132 - 17845.957: 87.2925% ( 80) 00:11:44.942 17845.957 - 17946.782: 87.9639% ( 55) 00:11:44.942 17946.782 - 18047.606: 88.5498% ( 48) 00:11:44.942 18047.606 - 18148.431: 89.1846% ( 52) 00:11:44.942 18148.431 - 18249.255: 89.8193% ( 52) 00:11:44.942 18249.255 - 18350.080: 90.5151% ( 57) 00:11:44.942 18350.080 - 18450.905: 91.0034% ( 40) 00:11:44.942 18450.905 - 18551.729: 91.4673% ( 38) 00:11:44.942 18551.729 - 18652.554: 91.8091% ( 28) 00:11:44.942 18652.554 - 18753.378: 92.0654% ( 21) 00:11:44.942 18753.378 - 
18854.203: 92.6270% ( 46) 00:11:44.942 18854.203 - 18955.028: 93.0176% ( 32) 00:11:44.942 18955.028 - 19055.852: 93.3594% ( 28) 00:11:44.942 19055.852 - 19156.677: 93.7378% ( 31) 00:11:44.942 19156.677 - 19257.502: 94.0430% ( 25) 00:11:44.942 19257.502 - 19358.326: 94.3359% ( 24) 00:11:44.942 19358.326 - 19459.151: 94.6289% ( 24) 00:11:44.942 19459.151 - 19559.975: 94.9585% ( 27) 00:11:44.942 19559.975 - 19660.800: 95.3491% ( 32) 00:11:44.942 19660.800 - 19761.625: 95.6421% ( 24) 00:11:44.942 19761.625 - 19862.449: 95.7642% ( 10) 00:11:44.942 19862.449 - 19963.274: 95.9717% ( 17) 00:11:44.942 19963.274 - 20064.098: 96.3867% ( 34) 00:11:44.942 20064.098 - 20164.923: 96.6675% ( 23) 00:11:44.942 20164.923 - 20265.748: 96.8872% ( 18) 00:11:44.942 20265.748 - 20366.572: 97.0825% ( 16) 00:11:44.942 20366.572 - 20467.397: 97.3389% ( 21) 00:11:44.942 20467.397 - 20568.222: 97.4609% ( 10) 00:11:44.942 20568.222 - 20669.046: 97.6807% ( 18) 00:11:44.942 20669.046 - 20769.871: 97.7783% ( 8) 00:11:44.942 20769.871 - 20870.695: 97.9614% ( 15) 00:11:44.942 20870.695 - 20971.520: 98.0713% ( 9) 00:11:44.942 20971.520 - 21072.345: 98.1812% ( 9) 00:11:44.942 21072.345 - 21173.169: 98.2666% ( 7) 00:11:44.942 21173.169 - 21273.994: 98.3765% ( 9) 00:11:44.943 21273.994 - 21374.818: 98.4375% ( 5) 00:11:44.943 28029.243 - 28230.892: 98.4985% ( 5) 00:11:44.943 28230.892 - 28432.542: 98.5596% ( 5) 00:11:44.943 28432.542 - 28634.191: 98.7183% ( 13) 00:11:44.943 28634.191 - 28835.840: 98.7671% ( 4) 00:11:44.943 28835.840 - 29037.489: 98.8770% ( 9) 00:11:44.943 29037.489 - 29239.138: 98.9746% ( 8) 00:11:44.943 29239.138 - 29440.788: 99.0601% ( 7) 00:11:44.943 29440.788 - 29642.437: 99.1211% ( 5) 00:11:44.943 29642.437 - 29844.086: 99.2065% ( 7) 00:11:44.943 29844.086 - 30045.735: 99.2188% ( 1) 00:11:44.943 35086.966 - 35288.615: 99.2554% ( 3) 00:11:44.943 35288.615 - 35490.265: 99.3164% ( 5) 00:11:44.943 35490.265 - 35691.914: 99.4385% ( 10) 00:11:44.943 35691.914 - 35893.563: 99.5117% ( 6) 00:11:44.943 35893.563 - 36095.212: 99.6216% ( 9) 00:11:44.943 36095.212 - 36296.862: 99.7070% ( 7) 00:11:44.943 36296.862 - 36498.511: 99.7925% ( 7) 00:11:44.943 36498.511 - 36700.160: 99.8779% ( 7) 00:11:44.943 36700.160 - 36901.809: 99.9878% ( 9) 00:11:44.943 36901.809 - 37103.458: 100.0000% ( 1) 00:11:44.943 00:11:44.943 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:44.943 ============================================================================== 00:11:44.943 Range in us Cumulative IO count 00:11:44.943 11443.594 - 11494.006: 0.0122% ( 1) 00:11:44.943 11494.006 - 11544.418: 0.0732% ( 5) 00:11:44.943 11544.418 - 11594.831: 0.2075% ( 11) 00:11:44.943 11594.831 - 11645.243: 0.3052% ( 8) 00:11:44.943 11645.243 - 11695.655: 0.4028% ( 8) 00:11:44.943 11695.655 - 11746.068: 0.5005% ( 8) 00:11:44.943 11746.068 - 11796.480: 0.6226% ( 10) 00:11:44.943 11796.480 - 11846.892: 0.7568% ( 11) 00:11:44.943 11846.892 - 11897.305: 0.9033% ( 12) 00:11:44.943 11897.305 - 11947.717: 1.0498% ( 12) 00:11:44.943 11947.717 - 11998.129: 1.2207% ( 14) 00:11:44.943 11998.129 - 12048.542: 1.4160% ( 16) 00:11:44.943 12048.542 - 12098.954: 1.6724% ( 21) 00:11:44.943 12098.954 - 12149.366: 2.0142% ( 28) 00:11:44.943 12149.366 - 12199.778: 2.3804% ( 30) 00:11:44.943 12199.778 - 12250.191: 2.7588% ( 31) 00:11:44.943 12250.191 - 12300.603: 3.3203% ( 46) 00:11:44.943 12300.603 - 12351.015: 3.8818% ( 46) 00:11:44.943 12351.015 - 12401.428: 4.3457% ( 38) 00:11:44.943 12401.428 - 12451.840: 4.7852% ( 36) 00:11:44.943 12451.840 - 12502.252: 
5.2246% ( 36) 00:11:44.943 12502.252 - 12552.665: 5.6519% ( 35) 00:11:44.943 12552.665 - 12603.077: 6.0669% ( 34) 00:11:44.943 12603.077 - 12653.489: 6.5430% ( 39) 00:11:44.943 12653.489 - 12703.902: 6.9946% ( 37) 00:11:44.943 12703.902 - 12754.314: 7.4829% ( 40) 00:11:44.943 12754.314 - 12804.726: 7.9712% ( 40) 00:11:44.943 12804.726 - 12855.138: 8.4473% ( 39) 00:11:44.943 12855.138 - 12905.551: 8.9966% ( 45) 00:11:44.943 12905.551 - 13006.375: 10.1318% ( 93) 00:11:44.943 13006.375 - 13107.200: 11.1816% ( 86) 00:11:44.943 13107.200 - 13208.025: 12.4512% ( 104) 00:11:44.943 13208.025 - 13308.849: 13.6841% ( 101) 00:11:44.943 13308.849 - 13409.674: 14.9902% ( 107) 00:11:44.943 13409.674 - 13510.498: 16.3818% ( 114) 00:11:44.943 13510.498 - 13611.323: 17.9077% ( 125) 00:11:44.943 13611.323 - 13712.148: 19.4458% ( 126) 00:11:44.943 13712.148 - 13812.972: 21.0327% ( 130) 00:11:44.943 13812.972 - 13913.797: 22.8516% ( 149) 00:11:44.943 13913.797 - 14014.622: 24.8901% ( 167) 00:11:44.943 14014.622 - 14115.446: 26.9287% ( 167) 00:11:44.943 14115.446 - 14216.271: 29.1016% ( 178) 00:11:44.943 14216.271 - 14317.095: 31.1035% ( 164) 00:11:44.943 14317.095 - 14417.920: 33.1543% ( 168) 00:11:44.943 14417.920 - 14518.745: 35.1685% ( 165) 00:11:44.943 14518.745 - 14619.569: 37.2314% ( 169) 00:11:44.943 14619.569 - 14720.394: 39.2944% ( 169) 00:11:44.943 14720.394 - 14821.218: 41.2354% ( 159) 00:11:44.943 14821.218 - 14922.043: 43.1274% ( 155) 00:11:44.943 14922.043 - 15022.868: 45.1294% ( 164) 00:11:44.943 15022.868 - 15123.692: 47.1558% ( 166) 00:11:44.943 15123.692 - 15224.517: 49.2188% ( 169) 00:11:44.943 15224.517 - 15325.342: 51.0254% ( 148) 00:11:44.943 15325.342 - 15426.166: 52.6733% ( 135) 00:11:44.943 15426.166 - 15526.991: 54.4434% ( 145) 00:11:44.943 15526.991 - 15627.815: 56.2744% ( 150) 00:11:44.943 15627.815 - 15728.640: 58.0444% ( 145) 00:11:44.943 15728.640 - 15829.465: 59.9243% ( 154) 00:11:44.943 15829.465 - 15930.289: 61.4624% ( 126) 00:11:44.943 15930.289 - 16031.114: 62.8662% ( 115) 00:11:44.943 16031.114 - 16131.938: 64.4287% ( 128) 00:11:44.943 16131.938 - 16232.763: 65.9180% ( 122) 00:11:44.943 16232.763 - 16333.588: 67.5537% ( 134) 00:11:44.943 16333.588 - 16434.412: 69.1895% ( 134) 00:11:44.943 16434.412 - 16535.237: 70.8130% ( 133) 00:11:44.943 16535.237 - 16636.062: 72.3389% ( 125) 00:11:44.943 16636.062 - 16736.886: 74.0234% ( 138) 00:11:44.943 16736.886 - 16837.711: 75.5615% ( 126) 00:11:44.943 16837.711 - 16938.535: 77.0630% ( 123) 00:11:44.943 16938.535 - 17039.360: 78.4546% ( 114) 00:11:44.943 17039.360 - 17140.185: 79.8462% ( 114) 00:11:44.943 17140.185 - 17241.009: 81.0303% ( 97) 00:11:44.943 17241.009 - 17341.834: 82.0801% ( 86) 00:11:44.943 17341.834 - 17442.658: 83.2397% ( 95) 00:11:44.943 17442.658 - 17543.483: 84.2041% ( 79) 00:11:44.943 17543.483 - 17644.308: 85.0830% ( 72) 00:11:44.943 17644.308 - 17745.132: 86.0107% ( 76) 00:11:44.943 17745.132 - 17845.957: 86.9141% ( 74) 00:11:44.943 17845.957 - 17946.782: 87.5854% ( 55) 00:11:44.943 17946.782 - 18047.606: 88.1104% ( 43) 00:11:44.943 18047.606 - 18148.431: 88.6108% ( 41) 00:11:44.943 18148.431 - 18249.255: 89.1357% ( 43) 00:11:44.943 18249.255 - 18350.080: 89.6729% ( 44) 00:11:44.943 18350.080 - 18450.905: 90.1733% ( 41) 00:11:44.943 18450.905 - 18551.729: 90.6128% ( 36) 00:11:44.943 18551.729 - 18652.554: 91.0645% ( 37) 00:11:44.943 18652.554 - 18753.378: 91.5894% ( 43) 00:11:44.943 18753.378 - 18854.203: 92.1021% ( 42) 00:11:44.943 18854.203 - 18955.028: 92.5293% ( 35) 00:11:44.943 18955.028 - 19055.852: 
92.8711% ( 28) 00:11:44.943 19055.852 - 19156.677: 93.2129% ( 28) 00:11:44.943 19156.677 - 19257.502: 93.5669% ( 29) 00:11:44.943 19257.502 - 19358.326: 93.9087% ( 28) 00:11:44.943 19358.326 - 19459.151: 94.2383% ( 27) 00:11:44.943 19459.151 - 19559.975: 94.5312% ( 24) 00:11:44.943 19559.975 - 19660.800: 94.7388% ( 17) 00:11:44.943 19660.800 - 19761.625: 94.9097% ( 14) 00:11:44.943 19761.625 - 19862.449: 95.0928% ( 15) 00:11:44.943 19862.449 - 19963.274: 95.2637% ( 14) 00:11:44.943 19963.274 - 20064.098: 95.4346% ( 14) 00:11:44.943 20064.098 - 20164.923: 95.6543% ( 18) 00:11:44.943 20164.923 - 20265.748: 95.8984% ( 20) 00:11:44.943 20265.748 - 20366.572: 96.1548% ( 21) 00:11:44.943 20366.572 - 20467.397: 96.3745% ( 18) 00:11:44.943 20467.397 - 20568.222: 96.6309% ( 21) 00:11:44.943 20568.222 - 20669.046: 96.8872% ( 21) 00:11:44.943 20669.046 - 20769.871: 97.1191% ( 19) 00:11:44.943 20769.871 - 20870.695: 97.3755% ( 21) 00:11:44.943 20870.695 - 20971.520: 97.6318% ( 21) 00:11:44.943 20971.520 - 21072.345: 97.8638% ( 19) 00:11:44.943 21072.345 - 21173.169: 98.0469% ( 15) 00:11:44.943 21173.169 - 21273.994: 98.1812% ( 11) 00:11:44.943 21273.994 - 21374.818: 98.3032% ( 10) 00:11:44.943 21374.818 - 21475.643: 98.4131% ( 9) 00:11:44.943 21475.643 - 21576.468: 98.4375% ( 2) 00:11:44.943 26819.348 - 27020.997: 98.4863% ( 4) 00:11:44.943 27020.997 - 27222.646: 98.5840% ( 8) 00:11:44.943 27222.646 - 27424.295: 98.6816% ( 8) 00:11:44.943 27424.295 - 27625.945: 98.7915% ( 9) 00:11:44.943 27625.945 - 27827.594: 98.9014% ( 9) 00:11:44.943 27827.594 - 28029.243: 98.9990% ( 8) 00:11:44.943 28029.243 - 28230.892: 99.0967% ( 8) 00:11:44.943 28230.892 - 28432.542: 99.2065% ( 9) 00:11:44.943 28432.542 - 28634.191: 99.2188% ( 1) 00:11:44.943 33272.123 - 33473.772: 99.2310% ( 1) 00:11:44.943 33473.772 - 33675.422: 99.3286% ( 8) 00:11:44.943 33675.422 - 33877.071: 99.4263% ( 8) 00:11:44.943 33877.071 - 34078.720: 99.5361% ( 9) 00:11:44.943 34078.720 - 34280.369: 99.6338% ( 8) 00:11:44.943 34280.369 - 34482.018: 99.7437% ( 9) 00:11:44.943 34482.018 - 34683.668: 99.8413% ( 8) 00:11:44.943 34683.668 - 34885.317: 99.9390% ( 8) 00:11:44.943 34885.317 - 35086.966: 100.0000% ( 5) 00:11:44.943 00:11:44.943 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:44.943 ============================================================================== 00:11:44.943 Range in us Cumulative IO count 00:11:44.943 11544.418 - 11594.831: 0.0244% ( 2) 00:11:44.943 11594.831 - 11645.243: 0.0610% ( 3) 00:11:44.943 11645.243 - 11695.655: 0.0977% ( 3) 00:11:44.943 11695.655 - 11746.068: 0.1709% ( 6) 00:11:44.943 11746.068 - 11796.480: 0.2075% ( 3) 00:11:44.943 11796.480 - 11846.892: 0.2930% ( 7) 00:11:44.943 11846.892 - 11897.305: 0.4272% ( 11) 00:11:44.943 11897.305 - 11947.717: 0.5981% ( 14) 00:11:44.943 11947.717 - 11998.129: 0.8057% ( 17) 00:11:44.943 11998.129 - 12048.542: 1.0620% ( 21) 00:11:44.943 12048.542 - 12098.954: 1.3550% ( 24) 00:11:44.943 12098.954 - 12149.366: 1.6724% ( 26) 00:11:44.943 12149.366 - 12199.778: 1.9409% ( 22) 00:11:44.943 12199.778 - 12250.191: 2.2705% ( 27) 00:11:44.943 12250.191 - 12300.603: 2.7466% ( 39) 00:11:44.943 12300.603 - 12351.015: 3.2593% ( 42) 00:11:44.943 12351.015 - 12401.428: 3.7476% ( 40) 00:11:44.943 12401.428 - 12451.840: 4.2114% ( 38) 00:11:44.943 12451.840 - 12502.252: 4.7485% ( 44) 00:11:44.943 12502.252 - 12552.665: 5.3589% ( 50) 00:11:44.943 12552.665 - 12603.077: 5.9204% ( 46) 00:11:44.943 12603.077 - 12653.489: 6.5308% ( 50) 00:11:44.943 12653.489 - 12703.902: 7.1045% ( 
47) 00:11:44.943 12703.902 - 12754.314: 7.7759% ( 55) 00:11:44.943 12754.314 - 12804.726: 8.4839% ( 58) 00:11:44.944 12804.726 - 12855.138: 9.1187% ( 52) 00:11:44.944 12855.138 - 12905.551: 9.7290% ( 50) 00:11:44.944 12905.551 - 13006.375: 11.1328% ( 115) 00:11:44.944 13006.375 - 13107.200: 12.6221% ( 122) 00:11:44.944 13107.200 - 13208.025: 14.1846% ( 128) 00:11:44.944 13208.025 - 13308.849: 15.5518% ( 112) 00:11:44.944 13308.849 - 13409.674: 16.8335% ( 105) 00:11:44.944 13409.674 - 13510.498: 18.2373% ( 115) 00:11:44.944 13510.498 - 13611.323: 19.8853% ( 135) 00:11:44.944 13611.323 - 13712.148: 21.5454% ( 136) 00:11:44.944 13712.148 - 13812.972: 23.3398% ( 147) 00:11:44.944 13812.972 - 13913.797: 25.2563% ( 157) 00:11:44.944 13913.797 - 14014.622: 27.3193% ( 169) 00:11:44.944 14014.622 - 14115.446: 29.0771% ( 144) 00:11:44.944 14115.446 - 14216.271: 30.7861% ( 140) 00:11:44.944 14216.271 - 14317.095: 32.4951% ( 140) 00:11:44.944 14317.095 - 14417.920: 34.5703% ( 170) 00:11:44.944 14417.920 - 14518.745: 36.3525% ( 146) 00:11:44.944 14518.745 - 14619.569: 38.1836% ( 150) 00:11:44.944 14619.569 - 14720.394: 40.0757% ( 155) 00:11:44.944 14720.394 - 14821.218: 41.9678% ( 155) 00:11:44.944 14821.218 - 14922.043: 44.0430% ( 170) 00:11:44.944 14922.043 - 15022.868: 46.0449% ( 164) 00:11:44.944 15022.868 - 15123.692: 47.9126% ( 153) 00:11:44.944 15123.692 - 15224.517: 49.5850% ( 137) 00:11:44.944 15224.517 - 15325.342: 51.4648% ( 154) 00:11:44.944 15325.342 - 15426.166: 53.2837% ( 149) 00:11:44.944 15426.166 - 15526.991: 54.8950% ( 132) 00:11:44.944 15526.991 - 15627.815: 56.5796% ( 138) 00:11:44.944 15627.815 - 15728.640: 58.1177% ( 126) 00:11:44.944 15728.640 - 15829.465: 59.5093% ( 114) 00:11:44.944 15829.465 - 15930.289: 60.8887% ( 113) 00:11:44.944 15930.289 - 16031.114: 62.7075% ( 149) 00:11:44.944 16031.114 - 16131.938: 64.3555% ( 135) 00:11:44.944 16131.938 - 16232.763: 66.0156% ( 136) 00:11:44.944 16232.763 - 16333.588: 67.3584% ( 110) 00:11:44.944 16333.588 - 16434.412: 68.8354% ( 121) 00:11:44.944 16434.412 - 16535.237: 70.1904% ( 111) 00:11:44.944 16535.237 - 16636.062: 71.5332% ( 110) 00:11:44.944 16636.062 - 16736.886: 73.0103% ( 121) 00:11:44.944 16736.886 - 16837.711: 74.5972% ( 130) 00:11:44.944 16837.711 - 16938.535: 76.0620% ( 120) 00:11:44.944 16938.535 - 17039.360: 77.4902% ( 117) 00:11:44.944 17039.360 - 17140.185: 78.9917% ( 123) 00:11:44.944 17140.185 - 17241.009: 80.0415% ( 86) 00:11:44.944 17241.009 - 17341.834: 80.9082% ( 71) 00:11:44.944 17341.834 - 17442.658: 81.7749% ( 71) 00:11:44.944 17442.658 - 17543.483: 82.6782% ( 74) 00:11:44.944 17543.483 - 17644.308: 83.5938% ( 75) 00:11:44.944 17644.308 - 17745.132: 84.4849% ( 73) 00:11:44.944 17745.132 - 17845.957: 85.3394% ( 70) 00:11:44.944 17845.957 - 17946.782: 86.2183% ( 72) 00:11:44.944 17946.782 - 18047.606: 87.1704% ( 78) 00:11:44.944 18047.606 - 18148.431: 88.0249% ( 70) 00:11:44.944 18148.431 - 18249.255: 88.8550% ( 68) 00:11:44.944 18249.255 - 18350.080: 89.5264% ( 55) 00:11:44.944 18350.080 - 18450.905: 90.2222% ( 57) 00:11:44.944 18450.905 - 18551.729: 90.7959% ( 47) 00:11:44.944 18551.729 - 18652.554: 91.3086% ( 42) 00:11:44.944 18652.554 - 18753.378: 91.7236% ( 34) 00:11:44.944 18753.378 - 18854.203: 92.1997% ( 39) 00:11:44.944 18854.203 - 18955.028: 92.5903% ( 32) 00:11:44.944 18955.028 - 19055.852: 92.9932% ( 33) 00:11:44.944 19055.852 - 19156.677: 93.3594% ( 30) 00:11:44.944 19156.677 - 19257.502: 93.6890% ( 27) 00:11:44.944 19257.502 - 19358.326: 93.9453% ( 21) 00:11:44.944 19358.326 - 19459.151: 94.1284% 
( 15) 00:11:44.944 19459.151 - 19559.975: 94.2993% ( 14) 00:11:44.944 19559.975 - 19660.800: 94.4214% ( 10) 00:11:44.944 19660.800 - 19761.625: 94.5801% ( 13) 00:11:44.944 19761.625 - 19862.449: 94.7998% ( 18) 00:11:44.944 19862.449 - 19963.274: 95.0806% ( 23) 00:11:44.944 19963.274 - 20064.098: 95.3613% ( 23) 00:11:44.944 20064.098 - 20164.923: 95.7031% ( 28) 00:11:44.944 20164.923 - 20265.748: 96.0083% ( 25) 00:11:44.944 20265.748 - 20366.572: 96.3013% ( 24) 00:11:44.944 20366.572 - 20467.397: 96.5820% ( 23) 00:11:44.944 20467.397 - 20568.222: 96.9116% ( 27) 00:11:44.944 20568.222 - 20669.046: 97.1436% ( 19) 00:11:44.944 20669.046 - 20769.871: 97.3633% ( 18) 00:11:44.944 20769.871 - 20870.695: 97.5586% ( 16) 00:11:44.944 20870.695 - 20971.520: 97.7051% ( 12) 00:11:44.944 20971.520 - 21072.345: 97.8149% ( 9) 00:11:44.944 21072.345 - 21173.169: 97.9126% ( 8) 00:11:44.944 21173.169 - 21273.994: 98.0225% ( 9) 00:11:44.944 21273.994 - 21374.818: 98.1201% ( 8) 00:11:44.944 21374.818 - 21475.643: 98.2178% ( 8) 00:11:44.944 21475.643 - 21576.468: 98.2666% ( 4) 00:11:44.944 21576.468 - 21677.292: 98.3032% ( 3) 00:11:44.944 21677.292 - 21778.117: 98.3398% ( 3) 00:11:44.944 21778.117 - 21878.942: 98.3765% ( 3) 00:11:44.944 21878.942 - 21979.766: 98.4131% ( 3) 00:11:44.944 21979.766 - 22080.591: 98.4375% ( 2) 00:11:44.944 26012.751 - 26214.400: 98.4741% ( 3) 00:11:44.944 26214.400 - 26416.049: 98.5718% ( 8) 00:11:44.944 26416.049 - 26617.698: 98.6694% ( 8) 00:11:44.944 26617.698 - 26819.348: 98.7793% ( 9) 00:11:44.944 26819.348 - 27020.997: 98.8770% ( 8) 00:11:44.944 27020.997 - 27222.646: 98.9746% ( 8) 00:11:44.944 27222.646 - 27424.295: 99.0723% ( 8) 00:11:44.944 27424.295 - 27625.945: 99.1821% ( 9) 00:11:44.944 27625.945 - 27827.594: 99.2188% ( 3) 00:11:44.944 32667.175 - 32868.825: 99.3042% ( 7) 00:11:44.944 32868.825 - 33070.474: 99.4019% ( 8) 00:11:44.944 33070.474 - 33272.123: 99.4873% ( 7) 00:11:44.944 33272.123 - 33473.772: 99.5850% ( 8) 00:11:44.944 33473.772 - 33675.422: 99.6948% ( 9) 00:11:44.944 33675.422 - 33877.071: 99.7925% ( 8) 00:11:44.944 33877.071 - 34078.720: 99.9023% ( 9) 00:11:44.944 34078.720 - 34280.369: 100.0000% ( 8) 00:11:44.944 00:11:44.944 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:44.944 ============================================================================== 00:11:44.944 Range in us Cumulative IO count 00:11:44.944 10636.997 - 10687.409: 0.0242% ( 2) 00:11:44.944 10687.409 - 10737.822: 0.0484% ( 2) 00:11:44.944 10737.822 - 10788.234: 0.0969% ( 4) 00:11:44.944 10788.234 - 10838.646: 0.1575% ( 5) 00:11:44.944 10838.646 - 10889.058: 0.2422% ( 7) 00:11:44.944 10889.058 - 10939.471: 0.3270% ( 7) 00:11:44.944 10939.471 - 10989.883: 0.3997% ( 6) 00:11:44.944 10989.883 - 11040.295: 0.4966% ( 8) 00:11:44.944 11040.295 - 11090.708: 0.6177% ( 10) 00:11:44.944 11090.708 - 11141.120: 0.7389% ( 10) 00:11:44.944 11141.120 - 11191.532: 0.8358% ( 8) 00:11:44.944 11191.532 - 11241.945: 0.9448% ( 9) 00:11:44.944 11241.945 - 11292.357: 1.0296% ( 7) 00:11:44.944 11292.357 - 11342.769: 1.1386% ( 9) 00:11:44.944 11342.769 - 11393.182: 1.2718% ( 11) 00:11:44.944 11393.182 - 11443.594: 1.3687% ( 8) 00:11:44.944 11443.594 - 11494.006: 1.4777% ( 9) 00:11:44.944 11494.006 - 11544.418: 1.5504% ( 6) 00:11:44.944 11544.418 - 11594.831: 1.6715% ( 10) 00:11:44.944 11594.831 - 11645.243: 1.7805% ( 9) 00:11:44.944 11645.243 - 11695.655: 1.8774% ( 8) 00:11:44.944 11695.655 - 11746.068: 1.9743% ( 8) 00:11:44.944 11746.068 - 11796.480: 2.0954% ( 10) 00:11:44.944 11796.480 - 
11846.892: 2.2408% ( 12) 00:11:44.944 11846.892 - 11897.305: 2.4467% ( 17) 00:11:44.944 11897.305 - 11947.717: 2.5921% ( 12) 00:11:44.944 11947.717 - 11998.129: 2.7616% ( 14) 00:11:44.944 11998.129 - 12048.542: 3.0887% ( 27) 00:11:44.944 12048.542 - 12098.954: 3.3430% ( 21) 00:11:44.944 12098.954 - 12149.366: 3.6701% ( 27) 00:11:44.944 12149.366 - 12199.778: 4.0577% ( 32) 00:11:44.944 12199.778 - 12250.191: 4.4331% ( 31) 00:11:44.944 12250.191 - 12300.603: 4.8934% ( 38) 00:11:44.944 12300.603 - 12351.015: 5.4142% ( 43) 00:11:44.944 12351.015 - 12401.428: 5.8987% ( 40) 00:11:44.944 12401.428 - 12451.840: 6.4801% ( 48) 00:11:44.944 12451.840 - 12502.252: 7.0010% ( 43) 00:11:44.944 12502.252 - 12552.665: 7.4976% ( 41) 00:11:44.944 12552.665 - 12603.077: 8.0911% ( 49) 00:11:44.944 12603.077 - 12653.489: 8.6361% ( 45) 00:11:44.944 12653.489 - 12703.902: 9.1933% ( 46) 00:11:44.944 12703.902 - 12754.314: 9.8474% ( 54) 00:11:44.944 12754.314 - 12804.726: 10.3682% ( 43) 00:11:44.944 12804.726 - 12855.138: 10.9738% ( 50) 00:11:44.944 12855.138 - 12905.551: 11.6037% ( 52) 00:11:44.944 12905.551 - 13006.375: 12.9118% ( 108) 00:11:44.944 13006.375 - 13107.200: 14.1231% ( 100) 00:11:44.944 13107.200 - 13208.025: 15.5160% ( 115) 00:11:44.944 13208.025 - 13308.849: 16.9453% ( 118) 00:11:44.944 13308.849 - 13409.674: 18.2171% ( 105) 00:11:44.944 13409.674 - 13510.498: 19.5858% ( 113) 00:11:44.944 13510.498 - 13611.323: 20.8939% ( 108) 00:11:44.944 13611.323 - 13712.148: 22.3353% ( 119) 00:11:44.944 13712.148 - 13812.972: 23.6797% ( 111) 00:11:44.944 13812.972 - 13913.797: 25.1090% ( 118) 00:11:44.944 13913.797 - 14014.622: 26.7200% ( 133) 00:11:44.944 14014.622 - 14115.446: 28.2703% ( 128) 00:11:44.944 14115.446 - 14216.271: 29.9176% ( 136) 00:11:44.944 14216.271 - 14317.095: 31.8314% ( 158) 00:11:44.944 14317.095 - 14417.920: 33.6967% ( 154) 00:11:44.944 14417.920 - 14518.745: 35.6105% ( 158) 00:11:44.944 14518.745 - 14619.569: 37.6575% ( 169) 00:11:44.944 14619.569 - 14720.394: 39.6439% ( 164) 00:11:44.945 14720.394 - 14821.218: 41.6061% ( 162) 00:11:44.945 14821.218 - 14922.043: 43.5562% ( 161) 00:11:44.945 14922.043 - 15022.868: 45.5184% ( 162) 00:11:44.945 15022.868 - 15123.692: 47.6260% ( 174) 00:11:44.945 15123.692 - 15224.517: 49.4186% ( 148) 00:11:44.945 15224.517 - 15325.342: 51.2112% ( 148) 00:11:44.945 15325.342 - 15426.166: 53.1129% ( 157) 00:11:44.945 15426.166 - 15526.991: 54.9297% ( 150) 00:11:44.945 15526.991 - 15627.815: 56.6376% ( 141) 00:11:44.945 15627.815 - 15728.640: 58.3454% ( 141) 00:11:44.945 15728.640 - 15829.465: 59.8353% ( 123) 00:11:44.945 15829.465 - 15930.289: 61.2040% ( 113) 00:11:44.945 15930.289 - 16031.114: 62.5000% ( 107) 00:11:44.945 16031.114 - 16131.938: 63.8445% ( 111) 00:11:44.945 16131.938 - 16232.763: 65.2980% ( 120) 00:11:44.945 16232.763 - 16333.588: 66.5940% ( 107) 00:11:44.945 16333.588 - 16434.412: 67.9748% ( 114) 00:11:44.945 16434.412 - 16535.237: 69.5010% ( 126) 00:11:44.945 16535.237 - 16636.062: 71.0392% ( 127) 00:11:44.945 16636.062 - 16736.886: 72.4685% ( 118) 00:11:44.945 16736.886 - 16837.711: 73.7524% ( 106) 00:11:44.945 16837.711 - 16938.535: 75.0606% ( 108) 00:11:44.945 16938.535 - 17039.360: 76.2234% ( 96) 00:11:44.945 17039.360 - 17140.185: 77.2045% ( 81) 00:11:44.945 17140.185 - 17241.009: 78.3915% ( 98) 00:11:44.945 17241.009 - 17341.834: 79.5422% ( 95) 00:11:44.945 17341.834 - 17442.658: 80.5717% ( 85) 00:11:44.945 17442.658 - 17543.483: 81.7587% ( 98) 00:11:44.945 17543.483 - 17644.308: 82.9700% ( 100) 00:11:44.945 17644.308 - 17745.132: 
84.0237% ( 87) 00:11:44.945 17745.132 - 17845.957: 85.0412% ( 84) 00:11:44.945 17845.957 - 17946.782: 86.0707% ( 85) 00:11:44.945 17946.782 - 18047.606: 87.2214% ( 95) 00:11:44.945 18047.606 - 18148.431: 88.3479% ( 93) 00:11:44.945 18148.431 - 18249.255: 89.2805% ( 77) 00:11:44.945 18249.255 - 18350.080: 90.1647% ( 73) 00:11:44.945 18350.080 - 18450.905: 90.9278% ( 63) 00:11:44.945 18450.905 - 18551.729: 91.4123% ( 40) 00:11:44.945 18551.729 - 18652.554: 91.8120% ( 33) 00:11:44.945 18652.554 - 18753.378: 92.2481% ( 36) 00:11:44.945 18753.378 - 18854.203: 92.6478% ( 33) 00:11:44.945 18854.203 - 18955.028: 93.0838% ( 36) 00:11:44.945 18955.028 - 19055.852: 93.5320% ( 37) 00:11:44.945 19055.852 - 19156.677: 93.8711% ( 28) 00:11:44.945 19156.677 - 19257.502: 94.2224% ( 29) 00:11:44.945 19257.502 - 19358.326: 94.5858% ( 30) 00:11:44.945 19358.326 - 19459.151: 94.9249% ( 28) 00:11:44.945 19459.151 - 19559.975: 95.2035% ( 23) 00:11:44.945 19559.975 - 19660.800: 95.4457% ( 20) 00:11:44.945 19660.800 - 19761.625: 95.7122% ( 22) 00:11:44.945 19761.625 - 19862.449: 95.9545% ( 20) 00:11:44.945 19862.449 - 19963.274: 96.2330% ( 23) 00:11:44.945 19963.274 - 20064.098: 96.5359% ( 25) 00:11:44.945 20064.098 - 20164.923: 96.8144% ( 23) 00:11:44.945 20164.923 - 20265.748: 97.0567% ( 20) 00:11:44.945 20265.748 - 20366.572: 97.3110% ( 21) 00:11:44.945 20366.572 - 20467.397: 97.5291% ( 18) 00:11:44.945 20467.397 - 20568.222: 97.7350% ( 17) 00:11:44.945 20568.222 - 20669.046: 97.9288% ( 16) 00:11:44.945 20669.046 - 20769.871: 98.0984% ( 14) 00:11:44.945 20769.871 - 20870.695: 98.3164% ( 18) 00:11:44.945 20870.695 - 20971.520: 98.5102% ( 16) 00:11:44.945 20971.520 - 21072.345: 98.6676% ( 13) 00:11:44.945 21072.345 - 21173.169: 98.7888% ( 10) 00:11:44.945 21173.169 - 21273.994: 98.9220% ( 11) 00:11:44.945 21273.994 - 21374.818: 99.0310% ( 9) 00:11:44.945 21374.818 - 21475.643: 99.1158% ( 7) 00:11:44.945 21475.643 - 21576.468: 99.1885% ( 6) 00:11:44.945 21576.468 - 21677.292: 99.2248% ( 3) 00:11:44.945 25811.102 - 26012.751: 99.2854% ( 5) 00:11:44.945 26012.751 - 26214.400: 99.3944% ( 9) 00:11:44.945 26214.400 - 26416.049: 99.4913% ( 8) 00:11:44.945 26416.049 - 26617.698: 99.5882% ( 8) 00:11:44.945 26617.698 - 26819.348: 99.6851% ( 8) 00:11:44.945 26819.348 - 27020.997: 99.7941% ( 9) 00:11:44.945 27020.997 - 27222.646: 99.8910% ( 8) 00:11:44.945 27222.646 - 27424.295: 100.0000% ( 9) 00:11:44.945 00:11:44.945 13:31:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:46.357 Initializing NVMe Controllers 00:11:46.357 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:46.357 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:46.357 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:46.357 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:46.357 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:46.357 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:46.357 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:46.357 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:46.357 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:46.357 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:46.357 Initialization complete. Launching workers. 
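The spdk_nvme_perf invocation above drives all six attached namespaces at once; the results that follow, down to the END TEST nvme_perf banner, are its output, and the histograms earlier in the log use the same bucket format, "low - high: cumulative% ( count )", i.e. the number of I/Os whose latency fell inside that microsecond range plus a running cumulative percentage. An annotated sketch of the same invocation follows; flag meanings are as I read them from the tool's --help output and are worth re-checking on the branch under test:

    # Annotated re-run sketch of the command logged above.
    #   -q 128    queue depth per namespace
    #   -w write  I/O pattern (sequential writes)
    #   -o 12288  I/O size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       software latency tracking; given twice it also prints the
    #             detailed per-namespace histograms seen in this log
    #   -i 0      shared memory group ID
    ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

A histogram block can also be post-processed directly; the helper below is a hypothetical sketch, not part of the test suite, and histogram.txt stands for one extracted "Range in us / Cumulative IO count" block:

    # Print the first bucket whose cumulative percentage reaches 99%.
    awk -v target=99.0 '/%/ && / - / {
      pct = $4; sub(/%/, "", pct)              # 4th field is the cumulative %
      if (pct + 0 >= target) { sub(/:$/, "", $3); print $1 "-" $3 " us"; exit }
    }' histogram.txt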
00:11:46.357 ========================================================
00:11:46.357 Latency(us)
00:11:46.357 Device Information : IOPS MiB/s Average min max
00:11:46.357 PCIE (0000:00:11.0) NSID 1 from core 0: 9324.89 109.28 13749.45 9818.37 39901.20
00:11:46.357 PCIE (0000:00:13.0) NSID 1 from core 0: 9324.89 109.28 13729.10 9663.25 38703.53
00:11:46.357 PCIE (0000:00:10.0) NSID 1 from core 0: 9324.89 109.28 13706.37 9571.25 37791.01
00:11:46.357 PCIE (0000:00:12.0) NSID 1 from core 0: 9324.89 109.28 13684.01 9602.59 36837.96
00:11:46.357 PCIE (0000:00:12.0) NSID 2 from core 0: 9324.89 109.28 13662.96 9452.35 37260.47
00:11:46.357 PCIE (0000:00:12.0) NSID 3 from core 0: 9388.76 110.02 13549.48 9180.46 26428.22
00:11:46.357 ========================================================
00:11:46.357 Total : 56013.23 656.41 13680.08 9180.46 39901.20
00:11:46.357
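The MiB/s column is consistent with the IOPS column at this I/O size: every I/O is 12288 bytes, so MiB/s = IOPS * 12288 / 1048576. A quick arithmetic check against the table above:

    # Per-namespace row: 9324.89 IOPS at 12288 B each is 109.28 MiB/s.
    awk 'BEGIN { printf "%.2f\n", 9324.89 * 12288 / 1048576 }'    # prints 109.28
    # Total row: 56013.23 IOPS comes out to 656.41 MiB/s, as reported.
    awk 'BEGIN { printf "%.2f\n", 56013.23 * 12288 / 1048576 }'   # prints 656.41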
00:11:46.357 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:46.357 =================================================================================
00:11:46.357 1.00000% : 10132.874us
00:11:46.357 10.00000% : 11040.295us
00:11:46.357 25.00000% : 11947.717us
00:11:46.357 50.00000% : 13510.498us
00:11:46.357 75.00000% : 14821.218us
00:11:46.357 90.00000% : 16232.763us
00:11:46.357 95.00000% : 17039.360us
00:11:46.357 98.00000% : 18854.203us
00:11:46.357 99.00000% : 28835.840us
00:11:46.357 99.50000% : 37910.055us
00:11:46.357 99.90000% : 39724.898us
00:11:46.357 99.99000% : 39926.548us
00:11:46.357 99.99900% : 39926.548us
00:11:46.357 99.99990% : 39926.548us
00:11:46.357 99.99999% : 39926.548us
00:11:46.357
00:11:46.357 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:46.357 =================================================================================
00:11:46.357 1.00000% : 10082.462us
00:11:46.357 10.00000% : 11141.120us
00:11:46.357 25.00000% : 11947.717us
00:11:46.357 50.00000% : 13510.498us
00:11:46.357 75.00000% : 14821.218us
00:11:46.357 90.00000% : 16232.763us
00:11:46.357 95.00000% : 17039.360us
00:11:46.357 98.00000% : 18249.255us
00:11:46.357 99.00000% : 27827.594us
00:11:46.357 99.50000% : 37305.108us
00:11:46.357 99.90000% : 38515.003us
00:11:46.357 99.99000% : 38716.652us
00:11:46.357 99.99900% : 38716.652us
00:11:46.357 99.99990% : 38716.652us
00:11:46.357 99.99999% : 38716.652us
00:11:46.357
00:11:46.357 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:46.357 =================================================================================
00:11:46.357 1.00000% : 10032.049us
00:11:46.357 10.00000% : 11040.295us
00:11:46.357 25.00000% : 11998.129us
00:11:46.357 50.00000% : 13510.498us
00:11:46.357 75.00000% : 14922.043us
00:11:46.357 90.00000% : 16232.763us
00:11:46.357 95.00000% : 17039.360us
00:11:46.357 98.00000% : 17946.782us
00:11:46.357 99.00000% : 26819.348us
00:11:46.357 99.50000% : 36700.160us
00:11:46.357 99.90000% : 37708.406us
00:11:46.357 99.99000% : 37910.055us
00:11:46.357 99.99900% : 37910.055us
00:11:46.357 99.99990% : 37910.055us
00:11:46.357 99.99999% : 37910.055us
00:11:46.357
00:11:46.357 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:46.357 =================================================================================
00:11:46.357 1.00000% : 10183.286us
00:11:46.357 10.00000% : 11040.295us
00:11:46.357 25.00000% : 11897.305us
00:11:46.357 50.00000% : 13611.323us
00:11:46.357 75.00000% : 14821.218us
00:11:46.357 90.00000% : 16232.763us
00:11:46.357 95.00000% : 16938.535us
00:11:46.357 98.00000% : 18148.431us
00:11:46.357 99.00000% : 25811.102us
00:11:46.357 99.50000% : 35691.914us
00:11:46.357 99.90000% : 36700.160us
00:11:46.357 99.99000% : 36901.809us
00:11:46.357 99.99900% : 36901.809us
00:11:46.357 99.99990% : 36901.809us
00:11:46.357 99.99999% : 36901.809us
00:11:46.357
00:11:46.357 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:46.357 =================================================================================
00:11:46.357 1.00000% : 9880.812us
00:11:46.357 10.00000% : 11090.708us
00:11:46.357 25.00000% : 11897.305us
00:11:46.357 50.00000% : 13510.498us
00:11:46.357 75.00000% : 14821.218us
00:11:46.357 90.00000% : 16131.938us
00:11:46.357 95.00000% : 17241.009us
00:11:46.357 98.00000% : 18350.080us
00:11:46.357 99.00000% : 25710.277us
00:11:46.357 99.50000% : 36095.212us
00:11:46.357 99.90000% : 37103.458us
00:11:46.357 99.99000% : 37305.108us
00:11:46.357 99.99900% : 37305.108us
00:11:46.357 99.99990% : 37305.108us
00:11:46.357 99.99999% : 37305.108us
00:11:46.357
00:11:46.357 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:46.357 =================================================================================
00:11:46.357 1.00000% : 9880.812us
00:11:46.357 10.00000% : 11040.295us
00:11:46.357 25.00000% : 11947.717us
00:11:46.357 50.00000% : 13510.498us
00:11:46.357 75.00000% : 14720.394us
00:11:46.357 90.00000% : 15930.289us
00:11:46.357 95.00000% : 17039.360us
00:11:46.357 98.00000% : 18047.606us
00:11:46.357 99.00000% : 18854.203us
00:11:46.357 99.50000% : 25306.978us
00:11:46.357 99.90000% : 26214.400us
00:11:46.357 99.99000% : 26617.698us
00:11:46.357 99.99900% : 26617.698us
00:11:46.357 99.99990% : 26617.698us
00:11:46.357 99.99999% : 26617.698us
00:11:46.357
00:11:46.357 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:46.357 ==============================================================================
00:11:46.357 Range in us Cumulative IO count
00:11:46.357 9779.988 - 9830.400: 0.0214% ( 2)
00:11:46.357 9830.400 - 9880.812: 0.0642% ( 4)
00:11:46.357 9880.812 - 9931.225: 0.1819% ( 11)
00:11:46.357 9931.225 - 9981.637: 0.3104% ( 12)
00:11:46.357 9981.637 - 10032.049: 0.4709% ( 15)
00:11:46.357 10032.049 - 10082.462: 0.9097% ( 41)
00:11:46.357 10082.462 - 10132.874: 1.1665% ( 24)
00:11:46.357 10132.874 - 10183.286: 1.3913% ( 21)
00:11:46.357 10183.286 - 10233.698: 2.0013% ( 57)
00:11:46.357 10233.698 - 10284.111: 2.2688% ( 25)
00:11:46.357 10284.111 - 10334.523: 2.5578% ( 27)
00:11:46.357 10334.523 - 10384.935: 2.9110% ( 33)
00:11:46.357 10384.935 - 10435.348: 3.3283% ( 39)
00:11:46.357 10435.348 - 10485.760: 3.7029% ( 35)
00:11:46.357 10485.760 - 10536.172: 4.1417% ( 41)
00:11:46.357 10536.172 - 10586.585: 4.5912% ( 42)
00:11:46.357 10586.585 - 10636.997: 5.0835% ( 46)
00:11:46.357 10636.997 - 10687.409: 5.5009% ( 39)
00:11:46.357 10687.409 - 10737.822: 6.1216% ( 58)
00:11:46.357 10737.822 - 10788.234: 6.7316% ( 57)
00:11:46.357 10788.234 - 10838.646: 7.3737% ( 60)
00:11:46.357 10838.646 - 10889.058: 8.0586% ( 64)
00:11:46.357 10889.058 - 10939.471: 8.8613% ( 75)
00:11:46.357 10939.471 - 10989.883: 9.6211% ( 71)
00:11:46.357 10989.883 - 11040.295: 10.3382% ( 67)
00:11:46.357 11040.295 - 11090.708: 11.1515% ( 76)
00:11:46.357 11090.708 - 11141.120: 12.0719% ( 86)
00:11:46.357 11141.120 - 11191.532: 12.9281% ( 80)
00:11:46.357 11191.532 - 11241.945: 13.5809% ( 61)
00:11:46.357 11241.945 - 11292.357: 14.2551% ( 63)
00:11:46.357 11292.357 - 11342.769: 15.0578% ( 75)
00:11:46.357 11342.769 - 11393.182: 15.6999% (
60) 00:11:46.357 11393.182 - 11443.594: 16.6203% ( 86) 00:11:46.357 11443.594 - 11494.006: 17.3694% ( 70) 00:11:46.357 11494.006 - 11544.418: 18.0865% ( 67) 00:11:46.357 11544.418 - 11594.831: 18.9640% ( 82) 00:11:46.357 11594.831 - 11645.243: 19.9486% ( 92) 00:11:46.357 11645.243 - 11695.655: 21.0938% ( 107) 00:11:46.357 11695.655 - 11746.068: 21.8857% ( 74) 00:11:46.357 11746.068 - 11796.480: 22.7098% ( 77) 00:11:46.357 11796.480 - 11846.892: 23.6943% ( 92) 00:11:46.357 11846.892 - 11897.305: 24.5933% ( 84) 00:11:46.357 11897.305 - 11947.717: 25.7063% ( 104) 00:11:46.357 11947.717 - 11998.129: 26.6374% ( 87) 00:11:46.357 11998.129 - 12048.542: 27.3759% ( 69) 00:11:46.357 12048.542 - 12098.954: 28.0929% ( 67) 00:11:46.357 12098.954 - 12149.366: 28.6387% ( 51) 00:11:46.357 12149.366 - 12199.778: 29.1952% ( 52) 00:11:46.357 12199.778 - 12250.191: 29.9658% ( 72) 00:11:46.357 12250.191 - 12300.603: 30.3510% ( 36) 00:11:46.357 12300.603 - 12351.015: 30.6186% ( 25) 00:11:46.357 12351.015 - 12401.428: 30.8540% ( 22) 00:11:46.357 12401.428 - 12451.840: 31.0681% ( 20) 00:11:46.357 12451.840 - 12502.252: 31.4105% ( 32) 00:11:46.357 12502.252 - 12552.665: 31.7423% ( 31) 00:11:46.357 12552.665 - 12603.077: 32.1169% ( 35) 00:11:46.357 12603.077 - 12653.489: 32.5985% ( 45) 00:11:46.357 12653.489 - 12703.902: 33.0908% ( 46) 00:11:46.357 12703.902 - 12754.314: 33.6687% ( 54) 00:11:46.358 12754.314 - 12804.726: 34.3536% ( 64) 00:11:46.358 12804.726 - 12855.138: 35.2312% ( 82) 00:11:46.358 12855.138 - 12905.551: 36.4191% ( 111) 00:11:46.358 12905.551 - 13006.375: 38.4525% ( 190) 00:11:46.358 13006.375 - 13107.200: 40.4003% ( 182) 00:11:46.358 13107.200 - 13208.025: 42.4551% ( 192) 00:11:46.358 13208.025 - 13308.849: 45.5693% ( 291) 00:11:46.358 13308.849 - 13409.674: 48.0201% ( 229) 00:11:46.358 13409.674 - 13510.498: 50.3425% ( 217) 00:11:46.358 13510.498 - 13611.323: 53.2427% ( 271) 00:11:46.358 13611.323 - 13712.148: 55.6293% ( 223) 00:11:46.358 13712.148 - 13812.972: 57.5771% ( 182) 00:11:46.358 13812.972 - 13913.797: 59.7282% ( 201) 00:11:46.358 13913.797 - 14014.622: 61.5261% ( 168) 00:11:46.358 14014.622 - 14115.446: 63.5702% ( 191) 00:11:46.358 14115.446 - 14216.271: 65.5501% ( 185) 00:11:46.358 14216.271 - 14317.095: 67.5835% ( 190) 00:11:46.358 14317.095 - 14417.920: 69.3921% ( 169) 00:11:46.358 14417.920 - 14518.745: 71.2329% ( 172) 00:11:46.358 14518.745 - 14619.569: 72.8275% ( 149) 00:11:46.358 14619.569 - 14720.394: 74.3365% ( 141) 00:11:46.358 14720.394 - 14821.218: 75.6956% ( 127) 00:11:46.358 14821.218 - 14922.043: 77.2795% ( 148) 00:11:46.358 14922.043 - 15022.868: 78.6601% ( 129) 00:11:46.358 15022.868 - 15123.692: 79.9015% ( 116) 00:11:46.358 15123.692 - 15224.517: 81.2393% ( 125) 00:11:46.358 15224.517 - 15325.342: 82.6199% ( 129) 00:11:46.358 15325.342 - 15426.166: 83.6366% ( 95) 00:11:46.358 15426.166 - 15526.991: 84.6533% ( 95) 00:11:46.358 15526.991 - 15627.815: 85.8198% ( 109) 00:11:46.358 15627.815 - 15728.640: 86.8365% ( 95) 00:11:46.358 15728.640 - 15829.465: 87.7461% ( 85) 00:11:46.358 15829.465 - 15930.289: 88.4525% ( 66) 00:11:46.358 15930.289 - 16031.114: 89.0518% ( 56) 00:11:46.358 16031.114 - 16131.938: 89.6725% ( 58) 00:11:46.358 16131.938 - 16232.763: 90.3789% ( 66) 00:11:46.358 16232.763 - 16333.588: 90.9996% ( 58) 00:11:46.358 16333.588 - 16434.412: 91.5989% ( 56) 00:11:46.358 16434.412 - 16535.237: 92.5086% ( 85) 00:11:46.358 16535.237 - 16636.062: 93.3540% ( 79) 00:11:46.358 16636.062 - 16736.886: 93.8998% ( 51) 00:11:46.358 16736.886 - 16837.711: 94.4777% ( 54) 
00:11:46.358 16837.711 - 16938.535: 94.9379% ( 43) 00:11:46.358 16938.535 - 17039.360: 95.3232% ( 36) 00:11:46.358 17039.360 - 17140.185: 95.7513% ( 40) 00:11:46.358 17140.185 - 17241.009: 96.1366% ( 36) 00:11:46.358 17241.009 - 17341.834: 96.4148% ( 26) 00:11:46.358 17341.834 - 17442.658: 96.6717% ( 24) 00:11:46.358 17442.658 - 17543.483: 96.9285% ( 24) 00:11:46.358 17543.483 - 17644.308: 97.1854% ( 24) 00:11:46.358 17644.308 - 17745.132: 97.3994% ( 20) 00:11:46.358 17745.132 - 17845.957: 97.5278% ( 12) 00:11:46.358 17845.957 - 17946.782: 97.6241% ( 9) 00:11:46.358 17946.782 - 18047.606: 97.7954% ( 16) 00:11:46.358 18047.606 - 18148.431: 97.8596% ( 6) 00:11:46.358 18148.431 - 18249.255: 97.9131% ( 5) 00:11:46.358 18249.255 - 18350.080: 97.9452% ( 3) 00:11:46.358 18551.729 - 18652.554: 97.9559% ( 1) 00:11:46.358 18652.554 - 18753.378: 97.9987% ( 4) 00:11:46.358 18753.378 - 18854.203: 98.0629% ( 6) 00:11:46.358 18854.203 - 18955.028: 98.0950% ( 3) 00:11:46.358 18955.028 - 19055.852: 98.1485% ( 5) 00:11:46.358 19055.852 - 19156.677: 98.1914% ( 4) 00:11:46.358 19156.677 - 19257.502: 98.2342% ( 4) 00:11:46.358 19257.502 - 19358.326: 98.2663% ( 3) 00:11:46.358 19358.326 - 19459.151: 98.3091% ( 4) 00:11:46.358 19459.151 - 19559.975: 98.3626% ( 5) 00:11:46.358 19559.975 - 19660.800: 98.4054% ( 4) 00:11:46.358 19660.800 - 19761.625: 98.4589% ( 5) 00:11:46.358 19761.625 - 19862.449: 98.5124% ( 5) 00:11:46.358 19862.449 - 19963.274: 98.5766% ( 6) 00:11:46.358 19963.274 - 20064.098: 98.6301% ( 5) 00:11:46.358 27827.594 - 28029.243: 98.6408% ( 1) 00:11:46.358 28029.243 - 28230.892: 98.7051% ( 6) 00:11:46.358 28230.892 - 28432.542: 98.8870% ( 17) 00:11:46.358 28432.542 - 28634.191: 98.9833% ( 9) 00:11:46.358 28634.191 - 28835.840: 99.0582% ( 7) 00:11:46.358 28835.840 - 29037.489: 99.1438% ( 8) 00:11:46.358 29037.489 - 29239.138: 99.2295% ( 8) 00:11:46.358 29239.138 - 29440.788: 99.2937% ( 6) 00:11:46.358 29440.788 - 29642.437: 99.3151% ( 2) 00:11:46.358 37103.458 - 37305.108: 99.3472% ( 3) 00:11:46.358 37305.108 - 37506.757: 99.3900% ( 4) 00:11:46.358 37506.757 - 37708.406: 99.4328% ( 4) 00:11:46.358 37708.406 - 37910.055: 99.5077% ( 7) 00:11:46.358 38111.705 - 38313.354: 99.5505% ( 4) 00:11:46.358 38313.354 - 38515.003: 99.6040% ( 5) 00:11:46.358 38515.003 - 38716.652: 99.6575% ( 5) 00:11:46.358 38716.652 - 38918.302: 99.7110% ( 5) 00:11:46.358 38918.302 - 39119.951: 99.7646% ( 5) 00:11:46.358 39119.951 - 39321.600: 99.8181% ( 5) 00:11:46.358 39321.600 - 39523.249: 99.8823% ( 6) 00:11:46.358 39523.249 - 39724.898: 99.9358% ( 5) 00:11:46.358 39724.898 - 39926.548: 100.0000% ( 6) 00:11:46.358 00:11:46.358 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:46.358 ============================================================================== 00:11:46.358 Range in us Cumulative IO count 00:11:46.358 9628.751 - 9679.163: 0.0214% ( 2) 00:11:46.358 9679.163 - 9729.575: 0.0749% ( 5) 00:11:46.358 9729.575 - 9779.988: 0.1498% ( 7) 00:11:46.358 9779.988 - 9830.400: 0.2140% ( 6) 00:11:46.358 9830.400 - 9880.812: 0.3104% ( 9) 00:11:46.358 9880.812 - 9931.225: 0.4602% ( 14) 00:11:46.358 9931.225 - 9981.637: 0.6314% ( 16) 00:11:46.358 9981.637 - 10032.049: 0.8455% ( 20) 00:11:46.358 10032.049 - 10082.462: 1.0702% ( 21) 00:11:46.358 10082.462 - 10132.874: 1.3592% ( 27) 00:11:46.358 10132.874 - 10183.286: 1.7551% ( 37) 00:11:46.358 10183.286 - 10233.698: 2.1725% ( 39) 00:11:46.358 10233.698 - 10284.111: 2.5043% ( 31) 00:11:46.358 10284.111 - 10334.523: 2.8360% ( 31) 00:11:46.358 10334.523 - 10384.935: 
3.3497% ( 48) 00:11:46.358 10384.935 - 10435.348: 4.1096% ( 71) 00:11:46.358 10435.348 - 10485.760: 4.6447% ( 50) 00:11:46.358 10485.760 - 10536.172: 5.1049% ( 43) 00:11:46.358 10536.172 - 10586.585: 5.4580% ( 33) 00:11:46.358 10586.585 - 10636.997: 5.8540% ( 37) 00:11:46.358 10636.997 - 10687.409: 6.1965% ( 32) 00:11:46.358 10687.409 - 10737.822: 6.8065% ( 57) 00:11:46.358 10737.822 - 10788.234: 7.1597% ( 33) 00:11:46.358 10788.234 - 10838.646: 7.4486% ( 27) 00:11:46.358 10838.646 - 10889.058: 7.8446% ( 37) 00:11:46.358 10889.058 - 10939.471: 8.2620% ( 39) 00:11:46.358 10939.471 - 10989.883: 8.6901% ( 40) 00:11:46.358 10989.883 - 11040.295: 9.1610% ( 44) 00:11:46.358 11040.295 - 11090.708: 9.8994% ( 69) 00:11:46.358 11090.708 - 11141.120: 10.5415% ( 60) 00:11:46.358 11141.120 - 11191.532: 11.1622% ( 58) 00:11:46.358 11191.532 - 11241.945: 11.7080% ( 51) 00:11:46.358 11241.945 - 11292.357: 12.3288% ( 58) 00:11:46.358 11292.357 - 11342.769: 13.2277% ( 84) 00:11:46.358 11342.769 - 11393.182: 14.1160% ( 83) 00:11:46.358 11393.182 - 11443.594: 15.2611% ( 107) 00:11:46.358 11443.594 - 11494.006: 16.3206% ( 99) 00:11:46.358 11494.006 - 11544.418: 17.6477% ( 124) 00:11:46.358 11544.418 - 11594.831: 18.7821% ( 106) 00:11:46.358 11594.831 - 11645.243: 19.6169% ( 78) 00:11:46.358 11645.243 - 11695.655: 20.7192% ( 103) 00:11:46.358 11695.655 - 11746.068: 21.7359% ( 95) 00:11:46.358 11746.068 - 11796.480: 22.7526% ( 95) 00:11:46.358 11796.480 - 11846.892: 23.5124% ( 71) 00:11:46.358 11846.892 - 11897.305: 24.2830% ( 72) 00:11:46.358 11897.305 - 11947.717: 25.2461% ( 90) 00:11:46.358 11947.717 - 11998.129: 26.1558% ( 85) 00:11:46.358 11998.129 - 12048.542: 26.7337% ( 54) 00:11:46.358 12048.542 - 12098.954: 27.1511% ( 39) 00:11:46.358 12098.954 - 12149.366: 27.5578% ( 38) 00:11:46.358 12149.366 - 12199.778: 28.0180% ( 43) 00:11:46.358 12199.778 - 12250.191: 28.6066% ( 55) 00:11:46.358 12250.191 - 12300.603: 29.0347% ( 40) 00:11:46.358 12300.603 - 12351.015: 29.5698% ( 50) 00:11:46.358 12351.015 - 12401.428: 30.0835% ( 48) 00:11:46.358 12401.428 - 12451.840: 30.6079% ( 49) 00:11:46.358 12451.840 - 12502.252: 31.0788% ( 44) 00:11:46.358 12502.252 - 12552.665: 31.4961% ( 39) 00:11:46.358 12552.665 - 12603.077: 32.0741% ( 54) 00:11:46.358 12603.077 - 12653.489: 32.9088% ( 78) 00:11:46.358 12653.489 - 12703.902: 33.7008% ( 74) 00:11:46.358 12703.902 - 12754.314: 34.6211% ( 86) 00:11:46.358 12754.314 - 12804.726: 35.5950% ( 91) 00:11:46.358 12804.726 - 12855.138: 36.3335% ( 69) 00:11:46.358 12855.138 - 12905.551: 37.3930% ( 99) 00:11:46.358 12905.551 - 13006.375: 40.0792% ( 251) 00:11:46.358 13006.375 - 13107.200: 42.4551% ( 222) 00:11:46.358 13107.200 - 13208.025: 44.7988% ( 219) 00:11:46.358 13208.025 - 13308.849: 46.8964% ( 196) 00:11:46.358 13308.849 - 13409.674: 48.8121% ( 179) 00:11:46.358 13409.674 - 13510.498: 51.6909% ( 269) 00:11:46.358 13510.498 - 13611.323: 54.1203% ( 227) 00:11:46.358 13611.323 - 13712.148: 56.7102% ( 242) 00:11:46.358 13712.148 - 13812.972: 59.0967% ( 223) 00:11:46.358 13812.972 - 13913.797: 61.4191% ( 217) 00:11:46.358 13913.797 - 14014.622: 63.1528% ( 162) 00:11:46.358 14014.622 - 14115.446: 64.9187% ( 165) 00:11:46.358 14115.446 - 14216.271: 66.6738% ( 164) 00:11:46.358 14216.271 - 14317.095: 68.4610% ( 167) 00:11:46.358 14317.095 - 14417.920: 69.8951% ( 134) 00:11:46.358 14417.920 - 14518.745: 71.3720% ( 138) 00:11:46.358 14518.745 - 14619.569: 72.7847% ( 132) 00:11:46.358 14619.569 - 14720.394: 73.9833% ( 112) 00:11:46.358 14720.394 - 14821.218: 75.0428% ( 99) 00:11:46.358 
14821.218 - 14922.043: 76.0809% ( 97) 00:11:46.358 14922.043 - 15022.868: 77.3759% ( 121) 00:11:46.358 15022.868 - 15123.692: 78.8848% ( 141) 00:11:46.358 15123.692 - 15224.517: 80.0407% ( 108) 00:11:46.358 15224.517 - 15325.342: 81.2072% ( 109) 00:11:46.358 15325.342 - 15426.166: 82.1383% ( 87) 00:11:46.359 15426.166 - 15526.991: 83.2299% ( 102) 00:11:46.359 15526.991 - 15627.815: 84.2145% ( 92) 00:11:46.359 15627.815 - 15728.640: 85.1348% ( 86) 00:11:46.359 15728.640 - 15829.465: 86.3228% ( 111) 00:11:46.359 15829.465 - 15930.289: 87.2646% ( 88) 00:11:46.359 15930.289 - 16031.114: 88.2063% ( 88) 00:11:46.359 16031.114 - 16131.938: 89.5120% ( 122) 00:11:46.359 16131.938 - 16232.763: 90.3682% ( 80) 00:11:46.359 16232.763 - 16333.588: 91.2029% ( 78) 00:11:46.359 16333.588 - 16434.412: 91.7808% ( 54) 00:11:46.359 16434.412 - 16535.237: 92.4443% ( 62) 00:11:46.359 16535.237 - 16636.062: 93.0865% ( 60) 00:11:46.359 16636.062 - 16736.886: 93.7179% ( 59) 00:11:46.359 16736.886 - 16837.711: 94.3493% ( 59) 00:11:46.359 16837.711 - 16938.535: 94.8523% ( 47) 00:11:46.359 16938.535 - 17039.360: 95.4623% ( 57) 00:11:46.359 17039.360 - 17140.185: 96.3934% ( 87) 00:11:46.359 17140.185 - 17241.009: 96.9178% ( 49) 00:11:46.359 17241.009 - 17341.834: 97.2924% ( 35) 00:11:46.359 17341.834 - 17442.658: 97.5385% ( 23) 00:11:46.359 17442.658 - 17543.483: 97.7740% ( 22) 00:11:46.359 17543.483 - 17644.308: 97.9131% ( 13) 00:11:46.359 17644.308 - 17745.132: 97.9345% ( 2) 00:11:46.359 17745.132 - 17845.957: 97.9452% ( 1) 00:11:46.359 18148.431 - 18249.255: 98.0094% ( 6) 00:11:46.359 18249.255 - 18350.080: 98.0950% ( 8) 00:11:46.359 18350.080 - 18450.905: 98.1378% ( 4) 00:11:46.359 18450.905 - 18551.729: 98.1914% ( 5) 00:11:46.359 18551.729 - 18652.554: 98.2449% ( 5) 00:11:46.359 18652.554 - 18753.378: 98.2984% ( 5) 00:11:46.359 18753.378 - 18854.203: 98.3412% ( 4) 00:11:46.359 18854.203 - 18955.028: 98.3947% ( 5) 00:11:46.359 18955.028 - 19055.852: 98.4482% ( 5) 00:11:46.359 19055.852 - 19156.677: 98.5017% ( 5) 00:11:46.359 19156.677 - 19257.502: 98.5552% ( 5) 00:11:46.359 19257.502 - 19358.326: 98.6087% ( 5) 00:11:46.359 19358.326 - 19459.151: 98.6301% ( 2) 00:11:46.359 26819.348 - 27020.997: 98.7051% ( 7) 00:11:46.359 27020.997 - 27222.646: 98.7907% ( 8) 00:11:46.359 27222.646 - 27424.295: 98.8656% ( 7) 00:11:46.359 27424.295 - 27625.945: 98.9512% ( 8) 00:11:46.359 27625.945 - 27827.594: 99.0475% ( 9) 00:11:46.359 27827.594 - 28029.243: 99.1331% ( 8) 00:11:46.359 28029.243 - 28230.892: 99.2295% ( 9) 00:11:46.359 28230.892 - 28432.542: 99.3151% ( 8) 00:11:46.359 36700.160 - 36901.809: 99.3900% ( 7) 00:11:46.359 36901.809 - 37103.458: 99.4435% ( 5) 00:11:46.359 37103.458 - 37305.108: 99.5184% ( 7) 00:11:46.359 37305.108 - 37506.757: 99.5826% ( 6) 00:11:46.359 37506.757 - 37708.406: 99.6575% ( 7) 00:11:46.359 37708.406 - 37910.055: 99.7217% ( 6) 00:11:46.359 37910.055 - 38111.705: 99.7967% ( 7) 00:11:46.359 38111.705 - 38313.354: 99.8609% ( 6) 00:11:46.359 38313.354 - 38515.003: 99.9358% ( 7) 00:11:46.359 38515.003 - 38716.652: 100.0000% ( 6) 00:11:46.359 00:11:46.359 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:46.359 ============================================================================== 00:11:46.359 Range in us Cumulative IO count 00:11:46.359 9527.926 - 9578.338: 0.0107% ( 1) 00:11:46.359 9578.338 - 9628.751: 0.0321% ( 2) 00:11:46.359 9628.751 - 9679.163: 0.0535% ( 2) 00:11:46.359 9679.163 - 9729.575: 0.0963% ( 4) 00:11:46.359 9729.575 - 9779.988: 0.2033% ( 10) 00:11:46.359 
9779.988 - 9830.400: 0.2997% ( 9) 00:11:46.359 9830.400 - 9880.812: 0.4602% ( 15) 00:11:46.359 9880.812 - 9931.225: 0.6742% ( 20) 00:11:46.359 9931.225 - 9981.637: 0.8990% ( 21) 00:11:46.359 9981.637 - 10032.049: 1.1344% ( 22) 00:11:46.359 10032.049 - 10082.462: 1.3592% ( 21) 00:11:46.359 10082.462 - 10132.874: 1.7123% ( 33) 00:11:46.359 10132.874 - 10183.286: 2.0548% ( 32) 00:11:46.359 10183.286 - 10233.698: 2.3438% ( 27) 00:11:46.359 10233.698 - 10284.111: 2.7290% ( 36) 00:11:46.359 10284.111 - 10334.523: 3.0822% ( 33) 00:11:46.359 10334.523 - 10384.935: 3.4675% ( 36) 00:11:46.359 10384.935 - 10435.348: 4.0561% ( 55) 00:11:46.359 10435.348 - 10485.760: 4.4092% ( 33) 00:11:46.359 10485.760 - 10536.172: 4.8587% ( 42) 00:11:46.359 10536.172 - 10586.585: 5.2868% ( 40) 00:11:46.359 10586.585 - 10636.997: 5.5758% ( 27) 00:11:46.359 10636.997 - 10687.409: 6.0360% ( 43) 00:11:46.359 10687.409 - 10737.822: 6.5604% ( 49) 00:11:46.359 10737.822 - 10788.234: 7.1704% ( 57) 00:11:46.359 10788.234 - 10838.646: 7.7911% ( 58) 00:11:46.359 10838.646 - 10889.058: 8.1229% ( 31) 00:11:46.359 10889.058 - 10939.471: 8.6687% ( 51) 00:11:46.359 10939.471 - 10989.883: 9.4927% ( 77) 00:11:46.359 10989.883 - 11040.295: 10.1562% ( 62) 00:11:46.359 11040.295 - 11090.708: 10.7877% ( 59) 00:11:46.359 11090.708 - 11141.120: 11.4940% ( 66) 00:11:46.359 11141.120 - 11191.532: 12.0826% ( 55) 00:11:46.359 11191.532 - 11241.945: 12.8853% ( 75) 00:11:46.359 11241.945 - 11292.357: 13.7414% ( 80) 00:11:46.359 11292.357 - 11342.769: 14.8116% ( 100) 00:11:46.359 11342.769 - 11393.182: 15.7641% ( 89) 00:11:46.359 11393.182 - 11443.594: 16.5561% ( 74) 00:11:46.359 11443.594 - 11494.006: 17.2838% ( 68) 00:11:46.359 11494.006 - 11544.418: 17.9045% ( 58) 00:11:46.359 11544.418 - 11594.831: 18.6002% ( 65) 00:11:46.359 11594.831 - 11645.243: 19.3814% ( 73) 00:11:46.359 11645.243 - 11695.655: 20.2483% ( 81) 00:11:46.359 11695.655 - 11746.068: 21.0509% ( 75) 00:11:46.359 11746.068 - 11796.480: 21.7252% ( 63) 00:11:46.359 11796.480 - 11846.892: 22.7098% ( 92) 00:11:46.359 11846.892 - 11897.305: 23.5552% ( 79) 00:11:46.359 11897.305 - 11947.717: 24.2402% ( 64) 00:11:46.359 11947.717 - 11998.129: 25.0749% ( 78) 00:11:46.359 11998.129 - 12048.542: 25.9097% ( 78) 00:11:46.359 12048.542 - 12098.954: 26.7444% ( 78) 00:11:46.359 12098.954 - 12149.366: 27.6862% ( 88) 00:11:46.359 12149.366 - 12199.778: 28.4461% ( 71) 00:11:46.359 12199.778 - 12250.191: 29.1310% ( 64) 00:11:46.359 12250.191 - 12300.603: 29.7731% ( 60) 00:11:46.359 12300.603 - 12351.015: 30.4580% ( 64) 00:11:46.359 12351.015 - 12401.428: 31.4319% ( 91) 00:11:46.359 12401.428 - 12451.840: 32.1276% ( 65) 00:11:46.359 12451.840 - 12502.252: 32.9088% ( 73) 00:11:46.359 12502.252 - 12552.665: 33.7115% ( 75) 00:11:46.359 12552.665 - 12603.077: 34.6211% ( 85) 00:11:46.359 12603.077 - 12653.489: 35.2740% ( 61) 00:11:46.359 12653.489 - 12703.902: 36.1301% ( 80) 00:11:46.359 12703.902 - 12754.314: 36.9863% ( 80) 00:11:46.359 12754.314 - 12804.726: 37.8639% ( 82) 00:11:46.359 12804.726 - 12855.138: 38.9341% ( 100) 00:11:46.359 12855.138 - 12905.551: 39.5441% ( 57) 00:11:46.359 12905.551 - 13006.375: 41.2243% ( 157) 00:11:46.359 13006.375 - 13107.200: 42.8296% ( 150) 00:11:46.359 13107.200 - 13208.025: 44.6490% ( 170) 00:11:46.359 13208.025 - 13308.849: 46.6824% ( 190) 00:11:46.359 13308.849 - 13409.674: 48.9191% ( 209) 00:11:46.359 13409.674 - 13510.498: 51.5411% ( 245) 00:11:46.359 13510.498 - 13611.323: 53.5103% ( 184) 00:11:46.359 13611.323 - 13712.148: 55.6935% ( 204) 00:11:46.359 
13712.148 - 13812.972: 57.6948% ( 187) 00:11:46.359 13812.972 - 13913.797: 60.0920% ( 224) 00:11:46.359 13913.797 - 14014.622: 61.9542% ( 174) 00:11:46.359 14014.622 - 14115.446: 63.9020% ( 182) 00:11:46.359 14115.446 - 14216.271: 65.8604% ( 183) 00:11:46.359 14216.271 - 14317.095: 67.6798% ( 170) 00:11:46.359 14317.095 - 14417.920: 69.1781% ( 140) 00:11:46.359 14417.920 - 14518.745: 70.3874% ( 113) 00:11:46.359 14518.745 - 14619.569: 71.8429% ( 136) 00:11:46.359 14619.569 - 14720.394: 73.2128% ( 128) 00:11:46.359 14720.394 - 14821.218: 74.3472% ( 106) 00:11:46.359 14821.218 - 14922.043: 75.6314% ( 120) 00:11:46.359 14922.043 - 15022.868: 76.9692% ( 125) 00:11:46.359 15022.868 - 15123.692: 78.7778% ( 169) 00:11:46.359 15123.692 - 15224.517: 80.2547% ( 138) 00:11:46.359 15224.517 - 15325.342: 81.5711% ( 123) 00:11:46.359 15325.342 - 15426.166: 82.7804% ( 113) 00:11:46.359 15426.166 - 15526.991: 83.7864% ( 94) 00:11:46.359 15526.991 - 15627.815: 84.8138% ( 96) 00:11:46.359 15627.815 - 15728.640: 85.8840% ( 100) 00:11:46.359 15728.640 - 15829.465: 86.7723% ( 83) 00:11:46.359 15829.465 - 15930.289: 87.8104% ( 97) 00:11:46.359 15930.289 - 16031.114: 88.8164% ( 94) 00:11:46.359 16031.114 - 16131.938: 89.7046% ( 83) 00:11:46.359 16131.938 - 16232.763: 90.6785% ( 91) 00:11:46.359 16232.763 - 16333.588: 91.3313% ( 61) 00:11:46.359 16333.588 - 16434.412: 91.9414% ( 57) 00:11:46.359 16434.412 - 16535.237: 92.5728% ( 59) 00:11:46.359 16535.237 - 16636.062: 93.0116% ( 41) 00:11:46.359 16636.062 - 16736.886: 93.7179% ( 66) 00:11:46.359 16736.886 - 16837.711: 94.3172% ( 56) 00:11:46.359 16837.711 - 16938.535: 94.9700% ( 61) 00:11:46.359 16938.535 - 17039.360: 95.5801% ( 57) 00:11:46.359 17039.360 - 17140.185: 96.1152% ( 50) 00:11:46.359 17140.185 - 17241.009: 96.7038% ( 55) 00:11:46.359 17241.009 - 17341.834: 97.0248% ( 30) 00:11:46.359 17341.834 - 17442.658: 97.2496% ( 21) 00:11:46.359 17442.658 - 17543.483: 97.4422% ( 18) 00:11:46.359 17543.483 - 17644.308: 97.6348% ( 18) 00:11:46.359 17644.308 - 17745.132: 97.8168% ( 17) 00:11:46.359 17745.132 - 17845.957: 97.9452% ( 12) 00:11:46.359 17845.957 - 17946.782: 98.0415% ( 9) 00:11:46.359 17946.782 - 18047.606: 98.0843% ( 4) 00:11:46.359 18047.606 - 18148.431: 98.1057% ( 2) 00:11:46.359 18148.431 - 18249.255: 98.1378% ( 3) 00:11:46.359 18249.255 - 18350.080: 98.1807% ( 4) 00:11:46.359 18350.080 - 18450.905: 98.2235% ( 4) 00:11:46.359 18450.905 - 18551.729: 98.2556% ( 3) 00:11:46.359 18551.729 - 18652.554: 98.2877% ( 3) 00:11:46.359 18652.554 - 18753.378: 98.3305% ( 4) 00:11:46.359 18753.378 - 18854.203: 98.3733% ( 4) 00:11:46.359 18854.203 - 18955.028: 98.4375% ( 6) 00:11:46.359 18955.028 - 19055.852: 98.4803% ( 4) 00:11:46.359 19055.852 - 19156.677: 98.5124% ( 3) 00:11:46.359 19156.677 - 19257.502: 98.5766% ( 6) 00:11:46.360 19257.502 - 19358.326: 98.6194% ( 4) 00:11:46.360 19358.326 - 19459.151: 98.6301% ( 1) 00:11:46.360 25710.277 - 25811.102: 98.6729% ( 4) 00:11:46.360 25811.102 - 26012.751: 98.7693% ( 9) 00:11:46.360 26012.751 - 26214.400: 98.8228% ( 5) 00:11:46.360 26214.400 - 26416.049: 98.9084% ( 8) 00:11:46.360 26416.049 - 26617.698: 98.9833% ( 7) 00:11:46.360 26617.698 - 26819.348: 99.0582% ( 7) 00:11:46.360 26819.348 - 27020.997: 99.1331% ( 7) 00:11:46.360 27020.997 - 27222.646: 99.2188% ( 8) 00:11:46.360 27222.646 - 27424.295: 99.2937% ( 7) 00:11:46.360 27424.295 - 27625.945: 99.3151% ( 2) 00:11:46.360 35893.563 - 36095.212: 99.3258% ( 1) 00:11:46.360 36095.212 - 36296.862: 99.4328% ( 10) 00:11:46.360 36296.862 - 36498.511: 99.4863% ( 5) 
00:11:46.360 36498.511 - 36700.160: 99.5719% ( 8) 00:11:46.360 36700.160 - 36901.809: 99.6468% ( 7) 00:11:46.360 36901.809 - 37103.458: 99.7217% ( 7) 00:11:46.360 37103.458 - 37305.108: 99.8074% ( 8) 00:11:46.360 37305.108 - 37506.757: 99.8930% ( 8) 00:11:46.360 37506.757 - 37708.406: 99.9679% ( 7) 00:11:46.360 37708.406 - 37910.055: 100.0000% ( 3) 00:11:46.360 00:11:46.360 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:46.360 ============================================================================== 00:11:46.360 Range in us Cumulative IO count 00:11:46.360 9578.338 - 9628.751: 0.0107% ( 1) 00:11:46.360 9679.163 - 9729.575: 0.0321% ( 2) 00:11:46.360 9729.575 - 9779.988: 0.0642% ( 3) 00:11:46.360 9779.988 - 9830.400: 0.0963% ( 3) 00:11:46.360 9830.400 - 9880.812: 0.1605% ( 6) 00:11:46.360 9880.812 - 9931.225: 0.2140% ( 5) 00:11:46.360 9931.225 - 9981.637: 0.3853% ( 16) 00:11:46.360 9981.637 - 10032.049: 0.4709% ( 8) 00:11:46.360 10032.049 - 10082.462: 0.6100% ( 13) 00:11:46.360 10082.462 - 10132.874: 0.8134% ( 19) 00:11:46.360 10132.874 - 10183.286: 1.0702% ( 24) 00:11:46.360 10183.286 - 10233.698: 1.4555% ( 36) 00:11:46.360 10233.698 - 10284.111: 1.8729% ( 39) 00:11:46.360 10284.111 - 10334.523: 2.4080% ( 50) 00:11:46.360 10334.523 - 10384.935: 2.9431% ( 50) 00:11:46.360 10384.935 - 10435.348: 3.3711% ( 40) 00:11:46.360 10435.348 - 10485.760: 3.7778% ( 38) 00:11:46.360 10485.760 - 10536.172: 4.1524% ( 35) 00:11:46.360 10536.172 - 10586.585: 4.5270% ( 35) 00:11:46.360 10586.585 - 10636.997: 4.8587% ( 31) 00:11:46.360 10636.997 - 10687.409: 5.2333% ( 35) 00:11:46.360 10687.409 - 10737.822: 5.6079% ( 35) 00:11:46.360 10737.822 - 10788.234: 6.1109% ( 47) 00:11:46.360 10788.234 - 10838.646: 6.7958% ( 64) 00:11:46.360 10838.646 - 10889.058: 7.5128% ( 67) 00:11:46.360 10889.058 - 10939.471: 8.2727% ( 71) 00:11:46.360 10939.471 - 10989.883: 9.1717% ( 84) 00:11:46.360 10989.883 - 11040.295: 10.0385% ( 81) 00:11:46.360 11040.295 - 11090.708: 11.1301% ( 102) 00:11:46.360 11090.708 - 11141.120: 12.0291% ( 84) 00:11:46.360 11141.120 - 11191.532: 12.9602% ( 87) 00:11:46.360 11191.532 - 11241.945: 13.8271% ( 81) 00:11:46.360 11241.945 - 11292.357: 14.6190% ( 74) 00:11:46.360 11292.357 - 11342.769: 15.7106% ( 102) 00:11:46.360 11342.769 - 11393.182: 16.6631% ( 89) 00:11:46.360 11393.182 - 11443.594: 17.3480% ( 64) 00:11:46.360 11443.594 - 11494.006: 18.2256% ( 82) 00:11:46.360 11494.006 - 11544.418: 19.2209% ( 93) 00:11:46.360 11544.418 - 11594.831: 20.0771% ( 80) 00:11:46.360 11594.831 - 11645.243: 20.9867% ( 85) 00:11:46.360 11645.243 - 11695.655: 21.8964% ( 85) 00:11:46.360 11695.655 - 11746.068: 22.9131% ( 95) 00:11:46.360 11746.068 - 11796.480: 23.8549% ( 88) 00:11:46.360 11796.480 - 11846.892: 24.7003% ( 79) 00:11:46.360 11846.892 - 11897.305: 25.6528% ( 89) 00:11:46.360 11897.305 - 11947.717: 26.2735% ( 58) 00:11:46.360 11947.717 - 11998.129: 27.0334% ( 71) 00:11:46.360 11998.129 - 12048.542: 27.6541% ( 58) 00:11:46.360 12048.542 - 12098.954: 28.1571% ( 47) 00:11:46.360 12098.954 - 12149.366: 28.6280% ( 44) 00:11:46.360 12149.366 - 12199.778: 29.0240% ( 37) 00:11:46.360 12199.778 - 12250.191: 29.3985% ( 35) 00:11:46.360 12250.191 - 12300.603: 29.8801% ( 45) 00:11:46.360 12300.603 - 12351.015: 30.4366% ( 52) 00:11:46.360 12351.015 - 12401.428: 31.0895% ( 61) 00:11:46.360 12401.428 - 12451.840: 31.8493% ( 71) 00:11:46.360 12451.840 - 12502.252: 32.7590% ( 85) 00:11:46.360 12502.252 - 12552.665: 33.8399% ( 101) 00:11:46.360 12552.665 - 12603.077: 35.0920% ( 117) 00:11:46.360 
12603.077 - 12653.489: 36.0445% ( 89) 00:11:46.360 12653.489 - 12703.902: 37.1147% ( 100) 00:11:46.360 12703.902 - 12754.314: 38.2705% ( 108) 00:11:46.360 12754.314 - 12804.726: 39.1481% ( 82) 00:11:46.360 12804.726 - 12855.138: 39.8545% ( 66) 00:11:46.360 12855.138 - 12905.551: 40.7748% ( 86) 00:11:46.360 12905.551 - 13006.375: 42.0591% ( 120) 00:11:46.360 13006.375 - 13107.200: 43.3112% ( 117) 00:11:46.360 13107.200 - 13208.025: 44.5848% ( 119) 00:11:46.360 13208.025 - 13308.849: 46.2115% ( 152) 00:11:46.360 13308.849 - 13409.674: 47.9987% ( 167) 00:11:46.360 13409.674 - 13510.498: 49.7539% ( 164) 00:11:46.360 13510.498 - 13611.323: 51.6588% ( 178) 00:11:46.360 13611.323 - 13712.148: 53.8634% ( 206) 00:11:46.360 13712.148 - 13812.972: 55.9503% ( 195) 00:11:46.360 13812.972 - 13913.797: 58.2834% ( 218) 00:11:46.360 13913.797 - 14014.622: 60.6699% ( 223) 00:11:46.360 14014.622 - 14115.446: 62.9281% ( 211) 00:11:46.360 14115.446 - 14216.271: 65.1327% ( 206) 00:11:46.360 14216.271 - 14317.095: 67.1340% ( 187) 00:11:46.360 14317.095 - 14417.920: 69.1032% ( 184) 00:11:46.360 14417.920 - 14518.745: 71.0188% ( 179) 00:11:46.360 14518.745 - 14619.569: 72.5385% ( 142) 00:11:46.360 14619.569 - 14720.394: 74.0261% ( 139) 00:11:46.360 14720.394 - 14821.218: 75.6421% ( 151) 00:11:46.360 14821.218 - 14922.043: 77.2688% ( 152) 00:11:46.360 14922.043 - 15022.868: 78.8527% ( 148) 00:11:46.360 15022.868 - 15123.692: 80.3938% ( 144) 00:11:46.360 15123.692 - 15224.517: 81.6139% ( 114) 00:11:46.360 15224.517 - 15325.342: 82.6092% ( 93) 00:11:46.360 15325.342 - 15426.166: 83.4546% ( 79) 00:11:46.360 15426.166 - 15526.991: 84.2680% ( 76) 00:11:46.360 15526.991 - 15627.815: 85.1991% ( 87) 00:11:46.360 15627.815 - 15728.640: 86.3442% ( 107) 00:11:46.360 15728.640 - 15829.465: 87.3930% ( 98) 00:11:46.360 15829.465 - 15930.289: 88.3241% ( 87) 00:11:46.360 15930.289 - 16031.114: 88.9876% ( 62) 00:11:46.360 16031.114 - 16131.938: 89.5548% ( 53) 00:11:46.360 16131.938 - 16232.763: 90.3253% ( 72) 00:11:46.360 16232.763 - 16333.588: 90.9782% ( 61) 00:11:46.360 16333.588 - 16434.412: 91.6738% ( 65) 00:11:46.360 16434.412 - 16535.237: 92.3373% ( 62) 00:11:46.360 16535.237 - 16636.062: 93.0223% ( 64) 00:11:46.360 16636.062 - 16736.886: 93.6858% ( 62) 00:11:46.360 16736.886 - 16837.711: 94.5634% ( 82) 00:11:46.360 16837.711 - 16938.535: 95.0985% ( 50) 00:11:46.360 16938.535 - 17039.360: 95.5586% ( 43) 00:11:46.360 17039.360 - 17140.185: 96.0509% ( 46) 00:11:46.360 17140.185 - 17241.009: 96.3827% ( 31) 00:11:46.360 17241.009 - 17341.834: 96.6717% ( 27) 00:11:46.360 17341.834 - 17442.658: 97.0141% ( 32) 00:11:46.360 17442.658 - 17543.483: 97.1533% ( 13) 00:11:46.360 17543.483 - 17644.308: 97.2603% ( 10) 00:11:46.360 17644.308 - 17745.132: 97.3780% ( 11) 00:11:46.360 17745.132 - 17845.957: 97.5385% ( 15) 00:11:46.360 17845.957 - 17946.782: 97.7098% ( 16) 00:11:46.360 17946.782 - 18047.606: 97.8810% ( 16) 00:11:46.360 18047.606 - 18148.431: 98.0415% ( 15) 00:11:46.360 18148.431 - 18249.255: 98.1592% ( 11) 00:11:46.360 18249.255 - 18350.080: 98.2663% ( 10) 00:11:46.360 18350.080 - 18450.905: 98.3733% ( 10) 00:11:46.360 18450.905 - 18551.729: 98.4910% ( 11) 00:11:46.360 18551.729 - 18652.554: 98.5445% ( 5) 00:11:46.360 18652.554 - 18753.378: 98.5873% ( 4) 00:11:46.360 18753.378 - 18854.203: 98.6194% ( 3) 00:11:46.360 18854.203 - 18955.028: 98.6301% ( 1) 00:11:46.360 24802.855 - 24903.680: 98.6408% ( 1) 00:11:46.360 24903.680 - 25004.505: 98.6729% ( 3) 00:11:46.360 25004.505 - 25105.329: 98.7158% ( 4) 00:11:46.360 25105.329 - 
25206.154: 98.7586% ( 4) 00:11:46.360 25206.154 - 25306.978: 98.8014% ( 4) 00:11:46.360 25306.978 - 25407.803: 98.8442% ( 4) 00:11:46.360 25407.803 - 25508.628: 98.8870% ( 4) 00:11:46.360 25508.628 - 25609.452: 98.9298% ( 4) 00:11:46.360 25609.452 - 25710.277: 98.9833% ( 5) 00:11:46.360 25710.277 - 25811.102: 99.0261% ( 4) 00:11:46.360 25811.102 - 26012.751: 99.1117% ( 8) 00:11:46.360 26012.751 - 26214.400: 99.1973% ( 8) 00:11:46.360 26214.400 - 26416.049: 99.2830% ( 8) 00:11:46.360 26416.049 - 26617.698: 99.3151% ( 3) 00:11:46.360 34885.317 - 35086.966: 99.3258% ( 1) 00:11:46.360 35086.966 - 35288.615: 99.3900% ( 6) 00:11:46.360 35288.615 - 35490.265: 99.4649% ( 7) 00:11:46.360 35490.265 - 35691.914: 99.5505% ( 8) 00:11:46.360 35691.914 - 35893.563: 99.6254% ( 7) 00:11:46.360 35893.563 - 36095.212: 99.7110% ( 8) 00:11:46.360 36095.212 - 36296.862: 99.7860% ( 7) 00:11:46.360 36296.862 - 36498.511: 99.8609% ( 7) 00:11:46.361 36498.511 - 36700.160: 99.9358% ( 7) 00:11:46.361 36700.160 - 36901.809: 100.0000% ( 6) 00:11:46.361 00:11:46.361 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:46.361 ============================================================================== 00:11:46.361 Range in us Cumulative IO count 00:11:46.361 9427.102 - 9477.514: 0.0321% ( 3) 00:11:46.361 9477.514 - 9527.926: 0.0963% ( 6) 00:11:46.361 9527.926 - 9578.338: 0.1498% ( 5) 00:11:46.361 9578.338 - 9628.751: 0.1926% ( 4) 00:11:46.361 9628.751 - 9679.163: 0.2354% ( 4) 00:11:46.361 9679.163 - 9729.575: 0.3425% ( 10) 00:11:46.361 9729.575 - 9779.988: 0.6314% ( 27) 00:11:46.361 9779.988 - 9830.400: 0.8990% ( 25) 00:11:46.361 9830.400 - 9880.812: 1.0381% ( 13) 00:11:46.361 9880.812 - 9931.225: 1.1879% ( 14) 00:11:46.361 9931.225 - 9981.637: 1.3271% ( 13) 00:11:46.361 9981.637 - 10032.049: 1.5304% ( 19) 00:11:46.361 10032.049 - 10082.462: 1.6909% ( 15) 00:11:46.361 10082.462 - 10132.874: 1.9478% ( 24) 00:11:46.361 10132.874 - 10183.286: 2.1725% ( 21) 00:11:46.361 10183.286 - 10233.698: 2.4080% ( 22) 00:11:46.361 10233.698 - 10284.111: 2.6541% ( 23) 00:11:46.361 10284.111 - 10334.523: 3.0180% ( 34) 00:11:46.361 10334.523 - 10384.935: 3.4140% ( 37) 00:11:46.361 10384.935 - 10435.348: 3.6280% ( 20) 00:11:46.361 10435.348 - 10485.760: 4.0240% ( 37) 00:11:46.361 10485.760 - 10536.172: 4.3022% ( 26) 00:11:46.361 10536.172 - 10586.585: 4.6661% ( 34) 00:11:46.361 10586.585 - 10636.997: 4.9872% ( 30) 00:11:46.361 10636.997 - 10687.409: 5.3403% ( 33) 00:11:46.361 10687.409 - 10737.822: 5.8219% ( 45) 00:11:46.361 10737.822 - 10788.234: 6.1644% ( 32) 00:11:46.361 10788.234 - 10838.646: 6.6353% ( 44) 00:11:46.361 10838.646 - 10889.058: 7.2132% ( 54) 00:11:46.361 10889.058 - 10939.471: 7.8553% ( 60) 00:11:46.361 10939.471 - 10989.883: 8.8292% ( 91) 00:11:46.361 10989.883 - 11040.295: 9.8673% ( 97) 00:11:46.361 11040.295 - 11090.708: 11.0980% ( 115) 00:11:46.361 11090.708 - 11141.120: 12.1789% ( 101) 00:11:46.361 11141.120 - 11191.532: 13.0779% ( 84) 00:11:46.361 11191.532 - 11241.945: 13.8271% ( 70) 00:11:46.361 11241.945 - 11292.357: 14.7367% ( 85) 00:11:46.361 11292.357 - 11342.769: 15.7106% ( 91) 00:11:46.361 11342.769 - 11393.182: 16.7380% ( 96) 00:11:46.361 11393.182 - 11443.594: 17.6049% ( 81) 00:11:46.361 11443.594 - 11494.006: 18.7821% ( 110) 00:11:46.361 11494.006 - 11544.418: 19.7132% ( 87) 00:11:46.361 11544.418 - 11594.831: 20.7620% ( 98) 00:11:46.361 11594.831 - 11645.243: 21.8429% ( 101) 00:11:46.361 11645.243 - 11695.655: 22.6884% ( 79) 00:11:46.361 11695.655 - 11746.068: 23.4482% ( 71) 00:11:46.361 
11746.068 - 11796.480: 24.1117% ( 62) 00:11:46.361 11796.480 - 11846.892: 24.8288% ( 67) 00:11:46.361 11846.892 - 11897.305: 25.6742% ( 79) 00:11:46.361 11897.305 - 11947.717: 26.4769% ( 75) 00:11:46.361 11947.717 - 11998.129: 27.1618% ( 64) 00:11:46.361 11998.129 - 12048.542: 27.6220% ( 43) 00:11:46.361 12048.542 - 12098.954: 28.0822% ( 43) 00:11:46.361 12098.954 - 12149.366: 28.5317% ( 42) 00:11:46.361 12149.366 - 12199.778: 28.9812% ( 42) 00:11:46.361 12199.778 - 12250.191: 29.4307% ( 42) 00:11:46.361 12250.191 - 12300.603: 29.8908% ( 43) 00:11:46.361 12300.603 - 12351.015: 30.3938% ( 47) 00:11:46.361 12351.015 - 12401.428: 30.8219% ( 40) 00:11:46.361 12401.428 - 12451.840: 31.4854% ( 62) 00:11:46.361 12451.840 - 12502.252: 32.3309% ( 79) 00:11:46.361 12502.252 - 12552.665: 32.9944% ( 62) 00:11:46.361 12552.665 - 12603.077: 33.8934% ( 84) 00:11:46.361 12603.077 - 12653.489: 34.8887% ( 93) 00:11:46.361 12653.489 - 12703.902: 35.8305% ( 88) 00:11:46.361 12703.902 - 12754.314: 36.8686% ( 97) 00:11:46.361 12754.314 - 12804.726: 37.9281% ( 99) 00:11:46.361 12804.726 - 12855.138: 38.8378% ( 85) 00:11:46.361 12855.138 - 12905.551: 39.6511% ( 76) 00:11:46.361 12905.551 - 13006.375: 41.1815% ( 143) 00:11:46.361 13006.375 - 13107.200: 43.3968% ( 207) 00:11:46.361 13107.200 - 13208.025: 45.6229% ( 208) 00:11:46.361 13208.025 - 13308.849: 47.4743% ( 173) 00:11:46.361 13308.849 - 13409.674: 49.1438% ( 156) 00:11:46.361 13409.674 - 13510.498: 50.6956% ( 145) 00:11:46.361 13510.498 - 13611.323: 52.4936% ( 168) 00:11:46.361 13611.323 - 13712.148: 54.3664% ( 175) 00:11:46.361 13712.148 - 13812.972: 56.3784% ( 188) 00:11:46.361 13812.972 - 13913.797: 58.4546% ( 194) 00:11:46.361 13913.797 - 14014.622: 60.8091% ( 220) 00:11:46.361 14014.622 - 14115.446: 62.8104% ( 187) 00:11:46.361 14115.446 - 14216.271: 65.0043% ( 205) 00:11:46.361 14216.271 - 14317.095: 67.4122% ( 225) 00:11:46.361 14317.095 - 14417.920: 69.5312% ( 198) 00:11:46.361 14417.920 - 14518.745: 71.8964% ( 221) 00:11:46.361 14518.745 - 14619.569: 73.5766% ( 157) 00:11:46.361 14619.569 - 14720.394: 74.9144% ( 125) 00:11:46.361 14720.394 - 14821.218: 76.4127% ( 140) 00:11:46.361 14821.218 - 14922.043: 77.9324% ( 142) 00:11:46.361 14922.043 - 15022.868: 79.8159% ( 176) 00:11:46.361 15022.868 - 15123.692: 81.7744% ( 183) 00:11:46.361 15123.692 - 15224.517: 83.3797% ( 150) 00:11:46.361 15224.517 - 15325.342: 84.7068% ( 124) 00:11:46.361 15325.342 - 15426.166: 85.9803% ( 119) 00:11:46.361 15426.166 - 15526.991: 86.9328% ( 89) 00:11:46.361 15526.991 - 15627.815: 87.6284% ( 65) 00:11:46.361 15627.815 - 15728.640: 88.2705% ( 60) 00:11:46.361 15728.640 - 15829.465: 88.9341% ( 62) 00:11:46.361 15829.465 - 15930.289: 89.4157% ( 45) 00:11:46.361 15930.289 - 16031.114: 89.8652% ( 42) 00:11:46.361 16031.114 - 16131.938: 90.4110% ( 51) 00:11:46.361 16131.938 - 16232.763: 90.8818% ( 44) 00:11:46.361 16232.763 - 16333.588: 91.2885% ( 38) 00:11:46.361 16333.588 - 16434.412: 91.7059% ( 39) 00:11:46.361 16434.412 - 16535.237: 92.2945% ( 55) 00:11:46.361 16535.237 - 16636.062: 92.9259% ( 59) 00:11:46.361 16636.062 - 16736.886: 93.2577% ( 31) 00:11:46.361 16736.886 - 16837.711: 93.6323% ( 35) 00:11:46.361 16837.711 - 16938.535: 94.0176% ( 36) 00:11:46.361 16938.535 - 17039.360: 94.4777% ( 43) 00:11:46.361 17039.360 - 17140.185: 94.9593% ( 45) 00:11:46.361 17140.185 - 17241.009: 95.5693% ( 57) 00:11:46.361 17241.009 - 17341.834: 95.9011% ( 31) 00:11:46.361 17341.834 - 17442.658: 96.3292% ( 40) 00:11:46.361 17442.658 - 17543.483: 96.6717% ( 32) 00:11:46.361 17543.483 
- 17644.308: 96.8857% ( 20) 00:11:46.361 17644.308 - 17745.132: 97.0890% ( 19) 00:11:46.361 17745.132 - 17845.957: 97.2817% ( 18) 00:11:46.361 17845.957 - 17946.782: 97.4101% ( 12) 00:11:46.361 17946.782 - 18047.606: 97.5813% ( 16) 00:11:46.361 18047.606 - 18148.431: 97.7847% ( 19) 00:11:46.361 18148.431 - 18249.255: 97.9559% ( 16) 00:11:46.361 18249.255 - 18350.080: 98.0522% ( 9) 00:11:46.361 18350.080 - 18450.905: 98.1057% ( 5) 00:11:46.361 18450.905 - 18551.729: 98.1271% ( 2) 00:11:46.361 18551.729 - 18652.554: 98.1807% ( 5) 00:11:46.361 18652.554 - 18753.378: 98.2449% ( 6) 00:11:46.361 18753.378 - 18854.203: 98.3091% ( 6) 00:11:46.361 18854.203 - 18955.028: 98.3626% ( 5) 00:11:46.361 18955.028 - 19055.852: 98.4268% ( 6) 00:11:46.361 19055.852 - 19156.677: 98.4910% ( 6) 00:11:46.361 19156.677 - 19257.502: 98.5659% ( 7) 00:11:46.361 19257.502 - 19358.326: 98.6194% ( 5) 00:11:46.361 19358.326 - 19459.151: 98.6301% ( 1) 00:11:46.361 24702.031 - 24802.855: 98.6408% ( 1) 00:11:46.361 24802.855 - 24903.680: 98.6729% ( 3) 00:11:46.361 24903.680 - 25004.505: 98.7158% ( 4) 00:11:46.361 25004.505 - 25105.329: 98.7586% ( 4) 00:11:46.361 25105.329 - 25206.154: 98.8014% ( 4) 00:11:46.361 25206.154 - 25306.978: 98.8549% ( 5) 00:11:46.361 25306.978 - 25407.803: 98.8870% ( 3) 00:11:46.361 25407.803 - 25508.628: 98.9405% ( 5) 00:11:46.361 25508.628 - 25609.452: 98.9833% ( 4) 00:11:46.361 25609.452 - 25710.277: 99.0261% ( 4) 00:11:46.361 25710.277 - 25811.102: 99.0689% ( 4) 00:11:46.361 25811.102 - 26012.751: 99.1545% ( 8) 00:11:46.361 26012.751 - 26214.400: 99.2402% ( 8) 00:11:46.361 26214.400 - 26416.049: 99.3151% ( 7) 00:11:46.361 35288.615 - 35490.265: 99.3579% ( 4) 00:11:46.361 35490.265 - 35691.914: 99.4221% ( 6) 00:11:46.361 35691.914 - 35893.563: 99.4863% ( 6) 00:11:46.361 35893.563 - 36095.212: 99.5612% ( 7) 00:11:46.361 36095.212 - 36296.862: 99.6361% ( 7) 00:11:46.362 36296.862 - 36498.511: 99.7003% ( 6) 00:11:46.362 36498.511 - 36700.160: 99.7753% ( 7) 00:11:46.362 36700.160 - 36901.809: 99.8502% ( 7) 00:11:46.362 36901.809 - 37103.458: 99.9358% ( 8) 00:11:46.362 37103.458 - 37305.108: 100.0000% ( 6) 00:11:46.362 00:11:46.362 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:46.362 ============================================================================== 00:11:46.362 Range in us Cumulative IO count 00:11:46.362 9175.040 - 9225.452: 0.0106% ( 1) 00:11:46.362 9275.865 - 9326.277: 0.0213% ( 1) 00:11:46.362 9326.277 - 9376.689: 0.0531% ( 3) 00:11:46.362 9376.689 - 9427.102: 0.0744% ( 2) 00:11:46.362 9427.102 - 9477.514: 0.1063% ( 3) 00:11:46.362 9477.514 - 9527.926: 0.1382% ( 3) 00:11:46.362 9527.926 - 9578.338: 0.1701% ( 3) 00:11:46.362 9578.338 - 9628.751: 0.1913% ( 2) 00:11:46.362 9628.751 - 9679.163: 0.2551% ( 6) 00:11:46.362 9679.163 - 9729.575: 0.3614% ( 10) 00:11:46.362 9729.575 - 9779.988: 0.5740% ( 20) 00:11:46.362 9779.988 - 9830.400: 0.8291% ( 24) 00:11:46.362 9830.400 - 9880.812: 1.0098% ( 17) 00:11:46.362 9880.812 - 9931.225: 1.1480% ( 13) 00:11:46.362 9931.225 - 9981.637: 1.3180% ( 16) 00:11:46.362 9981.637 - 10032.049: 1.4668% ( 14) 00:11:46.362 10032.049 - 10082.462: 1.6901% ( 21) 00:11:46.362 10082.462 - 10132.874: 1.8495% ( 15) 00:11:46.362 10132.874 - 10183.286: 2.0514% ( 19) 00:11:46.362 10183.286 - 10233.698: 2.3597% ( 29) 00:11:46.362 10233.698 - 10284.111: 2.7849% ( 40) 00:11:46.362 10284.111 - 10334.523: 3.1675% ( 36) 00:11:46.362 10334.523 - 10384.935: 3.5714% ( 38) 00:11:46.362 10384.935 - 10435.348: 3.9966% ( 40) 00:11:46.362 10435.348 - 
10485.760: 4.5068% ( 48) 00:11:46.362 10485.760 - 10536.172: 5.1446% ( 60) 00:11:46.362 10536.172 - 10586.585: 5.7185% ( 54) 00:11:46.362 10586.585 - 10636.997: 6.0587% ( 32) 00:11:46.362 10636.997 - 10687.409: 6.4732% ( 39) 00:11:46.362 10687.409 - 10737.822: 7.0472% ( 54) 00:11:46.362 10737.822 - 10788.234: 7.6318% ( 55) 00:11:46.362 10788.234 - 10838.646: 8.0357% ( 38) 00:11:46.362 10838.646 - 10889.058: 8.4821% ( 42) 00:11:46.362 10889.058 - 10939.471: 8.9498% ( 44) 00:11:46.362 10939.471 - 10989.883: 9.4494% ( 47) 00:11:46.362 10989.883 - 11040.295: 10.0978% ( 61) 00:11:46.362 11040.295 - 11090.708: 10.8206% ( 68) 00:11:46.362 11090.708 - 11141.120: 11.6922% ( 82) 00:11:46.362 11141.120 - 11191.532: 12.5744% ( 83) 00:11:46.362 11191.532 - 11241.945: 13.7224% ( 108) 00:11:46.362 11241.945 - 11292.357: 14.5302% ( 76) 00:11:46.362 11292.357 - 11342.769: 15.1892% ( 62) 00:11:46.362 11342.769 - 11393.182: 15.7844% ( 56) 00:11:46.362 11393.182 - 11443.594: 16.3690% ( 55) 00:11:46.362 11443.594 - 11494.006: 17.1131% ( 70) 00:11:46.362 11494.006 - 11544.418: 17.9741% ( 81) 00:11:46.362 11544.418 - 11594.831: 18.7925% ( 77) 00:11:46.362 11594.831 - 11645.243: 19.8236% ( 97) 00:11:46.362 11645.243 - 11695.655: 20.7802% ( 90) 00:11:46.362 11695.655 - 11746.068: 21.7474% ( 91) 00:11:46.362 11746.068 - 11796.480: 22.7785% ( 97) 00:11:46.362 11796.480 - 11846.892: 23.8733% ( 103) 00:11:46.362 11846.892 - 11897.305: 24.7449% ( 82) 00:11:46.362 11897.305 - 11947.717: 25.4571% ( 67) 00:11:46.362 11947.717 - 11998.129: 26.0310% ( 54) 00:11:46.362 11998.129 - 12048.542: 26.5519% ( 49) 00:11:46.362 12048.542 - 12098.954: 26.9877% ( 41) 00:11:46.362 12098.954 - 12149.366: 27.3384% ( 33) 00:11:46.362 12149.366 - 12199.778: 27.6361% ( 28) 00:11:46.362 12199.778 - 12250.191: 27.9868% ( 33) 00:11:46.362 12250.191 - 12300.603: 28.4758% ( 46) 00:11:46.362 12300.603 - 12351.015: 28.9541% ( 45) 00:11:46.362 12351.015 - 12401.428: 29.5068% ( 52) 00:11:46.362 12401.428 - 12451.840: 30.1658% ( 62) 00:11:46.362 12451.840 - 12502.252: 31.0268% ( 81) 00:11:46.362 12502.252 - 12552.665: 31.9622% ( 88) 00:11:46.362 12552.665 - 12603.077: 32.7594% ( 75) 00:11:46.362 12603.077 - 12653.489: 33.6203% ( 81) 00:11:46.362 12653.489 - 12703.902: 34.6620% ( 98) 00:11:46.362 12703.902 - 12754.314: 35.4379% ( 73) 00:11:46.362 12754.314 - 12804.726: 36.3202% ( 83) 00:11:46.362 12804.726 - 12855.138: 37.1811% ( 81) 00:11:46.362 12855.138 - 12905.551: 38.3822% ( 113) 00:11:46.362 12905.551 - 13006.375: 40.6144% ( 210) 00:11:46.362 13006.375 - 13107.200: 42.8784% ( 213) 00:11:46.362 13107.200 - 13208.025: 45.0893% ( 208) 00:11:46.362 13208.025 - 13308.849: 47.5446% ( 231) 00:11:46.362 13308.849 - 13409.674: 49.5323% ( 187) 00:11:46.362 13409.674 - 13510.498: 51.2011% ( 157) 00:11:46.362 13510.498 - 13611.323: 52.7105% ( 142) 00:11:46.362 13611.323 - 13712.148: 54.2623% ( 146) 00:11:46.362 13712.148 - 13812.972: 56.0162% ( 165) 00:11:46.362 13812.972 - 13913.797: 58.2696% ( 212) 00:11:46.362 13913.797 - 14014.622: 60.5761% ( 217) 00:11:46.362 14014.622 - 14115.446: 63.2972% ( 256) 00:11:46.362 14115.446 - 14216.271: 65.6569% ( 222) 00:11:46.362 14216.271 - 14317.095: 68.2929% ( 248) 00:11:46.362 14317.095 - 14417.920: 70.1531% ( 175) 00:11:46.362 14417.920 - 14518.745: 72.0132% ( 175) 00:11:46.362 14518.745 - 14619.569: 73.5119% ( 141) 00:11:46.362 14619.569 - 14720.394: 75.0106% ( 141) 00:11:46.362 14720.394 - 14821.218: 76.2968% ( 121) 00:11:46.362 14821.218 - 14922.043: 77.6254% ( 125) 00:11:46.362 14922.043 - 15022.868: 79.1029% ( 
139)
00:11:46.362 15022.868 - 15123.692: 80.4209% ( 124)
00:11:46.362 15123.692 - 15224.517: 81.7921% ( 129)
00:11:46.362 15224.517 - 15325.342: 83.1420% ( 127)
00:11:46.362 15325.342 - 15426.166: 84.5026% ( 128)
00:11:46.362 15426.166 - 15526.991: 85.4698% ( 91)
00:11:46.362 15526.991 - 15627.815: 86.5221% ( 99)
00:11:46.362 15627.815 - 15728.640: 87.5425% ( 96)
00:11:46.362 15728.640 - 15829.465: 88.7224% ( 111)
00:11:46.362 15829.465 - 15930.289: 90.0510% ( 125)
00:11:46.362 15930.289 - 16031.114: 90.6569% ( 57)
00:11:46.362 16031.114 - 16131.938: 91.1990% ( 51)
00:11:46.362 16131.938 - 16232.763: 91.6667% ( 44)
00:11:46.362 16232.763 - 16333.588: 92.0812% ( 39)
00:11:46.362 16333.588 - 16434.412: 92.4320% ( 33)
00:11:46.362 16434.412 - 16535.237: 92.7721% ( 32)
00:11:46.362 16535.237 - 16636.062: 93.3142% ( 51)
00:11:46.362 16636.062 - 16736.886: 93.7606% ( 42)
00:11:46.362 16736.886 - 16837.711: 94.4196% ( 62)
00:11:46.362 16837.711 - 16938.535: 94.8873% ( 44)
00:11:46.362 16938.535 - 17039.360: 95.3444% ( 43)
00:11:46.362 17039.360 - 17140.185: 95.8227% ( 45)
00:11:46.362 17140.185 - 17241.009: 96.2160% ( 37)
00:11:46.362 17241.009 - 17341.834: 96.6837% ( 44)
00:11:46.362 17341.834 - 17442.658: 96.9494% ( 25)
00:11:46.362 17442.658 - 17543.483: 97.1832% ( 22)
00:11:46.362 17543.483 - 17644.308: 97.3852% ( 19)
00:11:46.362 17644.308 - 17745.132: 97.5978% ( 20)
00:11:46.362 17745.132 - 17845.957: 97.7679% ( 16)
00:11:46.362 17845.957 - 17946.782: 97.9167% ( 14)
00:11:46.362 17946.782 - 18047.606: 98.0655% ( 14)
00:11:46.362 18047.606 - 18148.431: 98.2674% ( 19)
00:11:46.362 18148.431 - 18249.255: 98.4269% ( 15)
00:11:46.362 18249.255 - 18350.080: 98.5651% ( 13)
00:11:46.362 18350.080 - 18450.905: 98.7032% ( 13)
00:11:46.362 18450.905 - 18551.729: 98.8095% ( 10)
00:11:46.362 18551.729 - 18652.554: 98.8946% ( 8)
00:11:46.362 18652.554 - 18753.378: 98.9902% ( 9)
00:11:46.362 18753.378 - 18854.203: 99.0646% ( 7)
00:11:46.362 18854.203 - 18955.028: 99.1603% ( 9)
00:11:46.362 18955.028 - 19055.852: 99.2347% ( 7)
00:11:46.362 19055.852 - 19156.677: 99.2985% ( 6)
00:11:46.362 19156.677 - 19257.502: 99.3197% ( 2)
00:11:46.362 24802.855 - 24903.680: 99.3410% ( 2)
00:11:46.362 24903.680 - 25004.505: 99.3835% ( 4)
00:11:46.362 25004.505 - 25105.329: 99.4260% ( 4)
00:11:46.362 25105.329 - 25206.154: 99.4685% ( 4)
00:11:46.362 25206.154 - 25306.978: 99.5111% ( 4)
00:11:46.362 25306.978 - 25407.803: 99.5642% ( 5)
00:11:46.362 25407.803 - 25508.628: 99.6067% ( 4)
00:11:46.362 25508.628 - 25609.452: 99.6492% ( 4)
00:11:46.362 25609.452 - 25710.277: 99.6811% ( 3)
00:11:46.362 25710.277 - 25811.102: 99.7236% ( 4)
00:11:46.362 25811.102 - 26012.751: 99.8087% ( 8)
00:11:46.362 26012.751 - 26214.400: 99.9043% ( 9)
00:11:46.362 26214.400 - 26416.049: 99.9894% ( 8)
00:11:46.362 26416.049 - 26617.698: 100.0000% ( 1)
00:11:46.362
00:11:46.362 13:31:44 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:46.362
00:11:46.362 real 0m2.540s
00:11:46.362 user 0m2.216s
00:11:46.362 sys 0m0.200s
00:11:46.362 13:31:44 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:46.362 13:31:44 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:11:46.362 ************************************
00:11:46.362 END TEST nvme_perf
00:11:46.362 ************************************
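Each test in this log is launched through the harness's run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timing around every test body. The sketch below is a simplified illustration of that pattern only; the real helper lives in SPDK's test/common/autotest_common.sh and does considerably more, such as xtrace management and exit-status bookkeeping:

    # Minimal run_test-style wrapper, for illustration.
    run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                  # run the test command; prints real/user/sys
      echo "************ END TEST $name ************"
    }
    # e.g.: run_test_sketch nvme_hello_world ./build/examples/hello_world -i 0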
']' 00:11:46.362 13:31:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.362 13:31:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.362 ************************************ 00:11:46.362 START TEST nvme_hello_world 00:11:46.362 ************************************ 00:11:46.362 13:31:45 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:46.362 Initializing NVMe Controllers 00:11:46.362 Attached to 0000:00:11.0 00:11:46.362 Namespace ID: 1 size: 5GB 00:11:46.362 Attached to 0000:00:13.0 00:11:46.362 Namespace ID: 1 size: 1GB 00:11:46.362 Attached to 0000:00:10.0 00:11:46.362 Namespace ID: 1 size: 6GB 00:11:46.362 Attached to 0000:00:12.0 00:11:46.362 Namespace ID: 1 size: 4GB 00:11:46.363 Namespace ID: 2 size: 4GB 00:11:46.363 Namespace ID: 3 size: 4GB 00:11:46.363 Initialization complete. 00:11:46.363 INFO: using host memory buffer for IO 00:11:46.363 Hello world! 00:11:46.363 INFO: using host memory buffer for IO 00:11:46.363 Hello world! 00:11:46.363 INFO: using host memory buffer for IO 00:11:46.363 Hello world! 00:11:46.363 INFO: using host memory buffer for IO 00:11:46.363 Hello world! 00:11:46.363 INFO: using host memory buffer for IO 00:11:46.363 Hello world! 00:11:46.363 INFO: using host memory buffer for IO 00:11:46.363 Hello world! 00:11:46.363 ************************************ 00:11:46.363 END TEST nvme_hello_world 00:11:46.363 ************************************ 00:11:46.363 00:11:46.363 real 0m0.250s 00:11:46.363 user 0m0.102s 00:11:46.363 sys 0m0.103s 00:11:46.363 13:31:45 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.363 13:31:45 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 13:31:45 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:46.363 13:31:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:46.363 13:31:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.363 13:31:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 ************************************ 00:11:46.363 START TEST nvme_sgl 00:11:46.363 ************************************ 00:11:46.363 13:31:45 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:46.623 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:46.623 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:46.623 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:46.623 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:46.623 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:46.623 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_9 
Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:46.623 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:46.623 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:46.623 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:46.623 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:46.623 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:46.623 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:46.623 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:46.623 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:46.624 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:46.885 NVMe Readv/Writev Request test 00:11:46.885 Attached to 0000:00:11.0 00:11:46.885 Attached to 0000:00:13.0 00:11:46.885 Attached to 0000:00:10.0 00:11:46.885 Attached to 0000:00:12.0 00:11:46.885 0000:00:11.0: build_io_request_2 test passed 00:11:46.885 0000:00:11.0: build_io_request_4 test passed 00:11:46.885 0000:00:11.0: build_io_request_5 test passed 00:11:46.885 0000:00:11.0: build_io_request_6 test passed 00:11:46.885 0000:00:11.0: build_io_request_7 test passed 00:11:46.885 0000:00:11.0: build_io_request_10 test passed 00:11:46.885 0000:00:10.0: build_io_request_2 test passed 00:11:46.885 0000:00:10.0: build_io_request_4 test passed 00:11:46.885 0000:00:10.0: build_io_request_5 test passed 00:11:46.885 0000:00:10.0: build_io_request_6 test passed 00:11:46.885 0000:00:10.0: build_io_request_7 test passed 00:11:46.885 0000:00:10.0: build_io_request_10 test passed 00:11:46.885 Cleaning up... 
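The timing lines and the START/END banners threaded through this log come from the harness's run_test helper, which wraps each test binary, times it, and prints the banners. A minimal sketch of such a wrapper (illustrative only; SPDK's real implementation in autotest_common.sh adds xtrace control and the argument checks visible above):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines printed after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # e.g.: run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl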
00:11:46.885 00:11:46.885 real 0m0.304s 00:11:46.885 user 0m0.150s 00:11:46.885 sys 0m0.110s 00:11:46.885 13:31:46 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.885 13:31:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 ************************************ 00:11:46.885 END TEST nvme_sgl 00:11:46.885 ************************************ 00:11:46.885 13:31:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:46.885 13:31:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:46.885 13:31:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.885 13:31:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.885 ************************************ 00:11:46.885 START TEST nvme_e2edp 00:11:46.885 ************************************ 00:11:46.885 13:31:46 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:47.145 NVMe Write/Read with End-to-End data protection test 00:11:47.145 Attached to 0000:00:11.0 00:11:47.145 Attached to 0000:00:13.0 00:11:47.145 Attached to 0000:00:10.0 00:11:47.145 Attached to 0000:00:12.0 00:11:47.145 Cleaning up... 00:11:47.145 00:11:47.145 real 0m0.234s 00:11:47.145 user 0m0.070s 00:11:47.145 sys 0m0.106s 00:11:47.145 13:31:46 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.145 13:31:46 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:47.145 ************************************ 00:11:47.145 END TEST nvme_e2edp 00:11:47.145 ************************************ 00:11:47.145 13:31:46 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:47.145 13:31:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.145 13:31:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.145 13:31:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.145 ************************************ 00:11:47.145 START TEST nvme_reserve 00:11:47.145 ************************************ 00:11:47.145 13:31:46 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:47.403 ===================================================== 00:11:47.403 NVMe Controller at PCI bus 0, device 17, function 0 00:11:47.403 ===================================================== 00:11:47.403 Reservations: Not Supported 00:11:47.403 ===================================================== 00:11:47.403 NVMe Controller at PCI bus 0, device 19, function 0 00:11:47.403 ===================================================== 00:11:47.403 Reservations: Not Supported 00:11:47.403 ===================================================== 00:11:47.403 NVMe Controller at PCI bus 0, device 16, function 0 00:11:47.403 ===================================================== 00:11:47.403 Reservations: Not Supported 00:11:47.404 ===================================================== 00:11:47.404 NVMe Controller at PCI bus 0, device 18, function 0 00:11:47.404 ===================================================== 00:11:47.404 Reservations: Not Supported 00:11:47.404 Reservation test passed 00:11:47.404 00:11:47.404 real 0m0.228s 00:11:47.404 user 0m0.087s 00:11:47.404 sys 0m0.093s 00:11:47.404 13:31:46 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.404 ************************************ 00:11:47.404 END TEST 
nvme_reserve 00:11:47.404 ************************************ 00:11:47.404 13:31:46 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:47.404 13:31:46 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:47.404 13:31:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.404 13:31:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.404 13:31:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.404 ************************************ 00:11:47.404 START TEST nvme_err_injection 00:11:47.404 ************************************ 00:11:47.404 13:31:46 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:47.664 NVMe Error Injection test 00:11:47.664 Attached to 0000:00:11.0 00:11:47.664 Attached to 0000:00:13.0 00:11:47.664 Attached to 0000:00:10.0 00:11:47.664 Attached to 0000:00:12.0 00:11:47.664 0000:00:11.0: get features failed as expected 00:11:47.664 0000:00:13.0: get features failed as expected 00:11:47.664 0000:00:10.0: get features failed as expected 00:11:47.664 0000:00:12.0: get features failed as expected 00:11:47.664 0000:00:11.0: get features successfully as expected 00:11:47.664 0000:00:13.0: get features successfully as expected 00:11:47.664 0000:00:10.0: get features successfully as expected 00:11:47.664 0000:00:12.0: get features successfully as expected 00:11:47.664 0000:00:12.0: read failed as expected 00:11:47.664 0000:00:11.0: read failed as expected 00:11:47.664 0000:00:13.0: read failed as expected 00:11:47.664 0000:00:10.0: read failed as expected 00:11:47.664 0000:00:13.0: read successfully as expected 00:11:47.664 0000:00:10.0: read successfully as expected 00:11:47.664 0000:00:12.0: read successfully as expected 00:11:47.664 0000:00:11.0: read successfully as expected 00:11:47.664 Cleaning up... 00:11:47.664 00:11:47.664 real 0m0.230s 00:11:47.664 user 0m0.090s 00:11:47.664 sys 0m0.095s 00:11:47.664 13:31:46 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.664 ************************************ 00:11:47.664 END TEST nvme_err_injection 00:11:47.664 ************************************ 00:11:47.664 13:31:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:47.664 13:31:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:47.664 13:31:46 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:11:47.664 13:31:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.664 13:31:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.664 ************************************ 00:11:47.664 START TEST nvme_overhead 00:11:47.664 ************************************ 00:11:47.664 13:31:47 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:49.048 Initializing NVMe Controllers 00:11:49.048 Attached to 0000:00:11.0 00:11:49.048 Attached to 0000:00:13.0 00:11:49.048 Attached to 0000:00:10.0 00:11:49.048 Attached to 0000:00:12.0 00:11:49.048 Initialization complete. Launching workers. 
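The overhead test output that follows consists of avg/min/max summaries in nanoseconds and then two per-operation latency histograms, one for submission and one for completion. Each histogram row gives a latency bucket in microseconds, the cumulative percentage of operations at or below that bucket, and the per-bucket sample count in parentheses; note the submit average of 12392.0 ns (about 12.4 us) agrees with the histogram mass concentrated between 11 and 12.5 us. A quick sketch for pulling the first bucket at or above a chosen percentile from a saved copy of this output (the overhead.log filename is an assumption):

    # print the first histogram bucket whose cumulative percentage reaches 99%
    awk '/ - .*%/ { p = $0; sub(/%.*/, "", p); sub(/.*: /, "", p)
                    if (p + 0 >= 99) { print; exit } }' overhead.log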
00:11:49.048 submit (in ns) avg, min, max = 12392.0, 9972.3, 83210.0 00:11:49.048 complete (in ns) avg, min, max = 8317.2, 7388.5, 107857.7 00:11:49.048 00:11:49.048 Submit histogram 00:11:49.048 ================ 00:11:49.048 Range in us Cumulative Count 00:11:49.048 9.945 - 9.994: 0.0266% ( 1) 00:11:49.048 10.289 - 10.338: 0.0532% ( 1) 00:11:49.048 10.634 - 10.683: 0.0797% ( 1) 00:11:49.048 10.683 - 10.732: 0.1063% ( 1) 00:11:49.048 10.732 - 10.782: 0.1595% ( 2) 00:11:49.048 10.782 - 10.831: 0.2127% ( 2) 00:11:49.048 10.831 - 10.880: 0.2924% ( 3) 00:11:49.048 10.880 - 10.929: 0.3721% ( 3) 00:11:49.048 10.929 - 10.978: 0.6114% ( 9) 00:11:49.048 10.978 - 11.028: 0.8240% ( 8) 00:11:49.048 11.028 - 11.077: 0.9835% ( 6) 00:11:49.048 11.077 - 11.126: 1.5417% ( 21) 00:11:49.048 11.126 - 11.175: 2.9506% ( 53) 00:11:49.048 11.175 - 11.225: 5.6619% ( 102) 00:11:49.048 11.225 - 11.274: 11.0048% ( 201) 00:11:49.048 11.274 - 11.323: 18.2616% ( 273) 00:11:49.048 11.323 - 11.372: 26.6082% ( 314) 00:11:49.048 11.372 - 11.422: 36.0447% ( 355) 00:11:49.048 11.422 - 11.471: 44.8166% ( 330) 00:11:49.048 11.471 - 11.520: 51.8607% ( 265) 00:11:49.048 11.520 - 11.569: 57.6023% ( 216) 00:11:49.048 11.569 - 11.618: 61.9086% ( 162) 00:11:49.048 11.618 - 11.668: 65.7363% ( 144) 00:11:49.048 11.668 - 11.717: 68.6869% ( 111) 00:11:49.048 11.717 - 11.766: 71.1324% ( 92) 00:11:49.048 11.766 - 11.815: 72.9399% ( 68) 00:11:49.048 11.815 - 11.865: 74.6146% ( 63) 00:11:49.048 11.865 - 11.914: 75.9171% ( 49) 00:11:49.048 11.914 - 11.963: 77.2727% ( 51) 00:11:49.048 11.963 - 12.012: 78.3094% ( 39) 00:11:49.048 12.012 - 12.062: 79.6385% ( 50) 00:11:49.048 12.062 - 12.111: 80.8612% ( 46) 00:11:49.048 12.111 - 12.160: 81.6587% ( 30) 00:11:49.048 12.160 - 12.209: 82.2967% ( 24) 00:11:49.048 12.209 - 12.258: 83.0144% ( 27) 00:11:49.048 12.258 - 12.308: 83.5460% ( 20) 00:11:49.048 12.308 - 12.357: 83.9447% ( 15) 00:11:49.048 12.357 - 12.406: 84.2903% ( 13) 00:11:49.048 12.406 - 12.455: 84.6358% ( 13) 00:11:49.048 12.455 - 12.505: 84.8485% ( 8) 00:11:49.048 12.505 - 12.554: 85.0877% ( 9) 00:11:49.048 12.554 - 12.603: 85.2206% ( 5) 00:11:49.048 12.603 - 12.702: 85.3535% ( 5) 00:11:49.048 12.702 - 12.800: 85.6194% ( 10) 00:11:49.048 12.800 - 12.898: 85.6725% ( 2) 00:11:49.048 12.898 - 12.997: 85.8320% ( 6) 00:11:49.048 12.997 - 13.095: 85.9383% ( 4) 00:11:49.048 13.095 - 13.194: 86.0447% ( 4) 00:11:49.048 13.194 - 13.292: 86.1244% ( 3) 00:11:49.048 13.292 - 13.391: 86.2573% ( 5) 00:11:49.048 13.391 - 13.489: 86.3902% ( 5) 00:11:49.048 13.489 - 13.588: 86.4965% ( 4) 00:11:49.048 13.588 - 13.686: 86.6295% ( 5) 00:11:49.048 13.686 - 13.785: 86.7358% ( 4) 00:11:49.048 13.785 - 13.883: 86.8953% ( 6) 00:11:49.048 13.883 - 13.982: 86.9484% ( 2) 00:11:49.048 13.982 - 14.080: 87.0813% ( 5) 00:11:49.048 14.080 - 14.178: 87.2142% ( 5) 00:11:49.048 14.178 - 14.277: 87.2674% ( 2) 00:11:49.048 14.277 - 14.375: 87.4269% ( 6) 00:11:49.048 14.375 - 14.474: 87.5864% ( 6) 00:11:49.048 14.474 - 14.572: 87.7459% ( 6) 00:11:49.048 14.572 - 14.671: 87.9054% ( 6) 00:11:49.048 14.671 - 14.769: 88.2243% ( 12) 00:11:49.048 14.769 - 14.868: 88.3307% ( 4) 00:11:49.048 14.868 - 14.966: 88.5433% ( 8) 00:11:49.048 14.966 - 15.065: 88.7560% ( 8) 00:11:49.048 15.065 - 15.163: 89.0218% ( 10) 00:11:49.048 15.163 - 15.262: 89.3142% ( 11) 00:11:49.048 15.262 - 15.360: 89.5003% ( 7) 00:11:49.048 15.360 - 15.458: 89.8724% ( 14) 00:11:49.048 15.458 - 15.557: 90.1116% ( 9) 00:11:49.048 15.557 - 15.655: 90.4040% ( 11) 00:11:49.048 15.655 - 15.754: 90.5635% ( 6) 00:11:49.048 
15.754 - 15.852: 90.9623% ( 15) 00:11:49.048 15.852 - 15.951: 91.3078% ( 13) 00:11:49.048 15.951 - 16.049: 91.6268% ( 12) 00:11:49.048 16.049 - 16.148: 91.9458% ( 12) 00:11:49.048 16.148 - 16.246: 92.1850% ( 9) 00:11:49.048 16.246 - 16.345: 92.6369% ( 17) 00:11:49.048 16.345 - 16.443: 92.8495% ( 8) 00:11:49.048 16.443 - 16.542: 93.0090% ( 6) 00:11:49.048 16.542 - 16.640: 93.2217% ( 8) 00:11:49.048 16.640 - 16.738: 93.4078% ( 7) 00:11:49.048 16.738 - 16.837: 93.5673% ( 6) 00:11:49.048 16.837 - 16.935: 93.7002% ( 5) 00:11:49.049 16.935 - 17.034: 93.8331% ( 5) 00:11:49.049 17.034 - 17.132: 94.0191% ( 7) 00:11:49.049 17.132 - 17.231: 94.1520% ( 5) 00:11:49.049 17.231 - 17.329: 94.2850% ( 5) 00:11:49.049 17.329 - 17.428: 94.4976% ( 8) 00:11:49.049 17.428 - 17.526: 94.6837% ( 7) 00:11:49.049 17.526 - 17.625: 94.9229% ( 9) 00:11:49.049 17.625 - 17.723: 95.1887% ( 10) 00:11:49.049 17.723 - 17.822: 95.3482% ( 6) 00:11:49.049 17.822 - 17.920: 95.6140% ( 10) 00:11:49.049 17.920 - 18.018: 95.7469% ( 5) 00:11:49.049 18.018 - 18.117: 95.9064% ( 6) 00:11:49.049 18.117 - 18.215: 96.0128% ( 4) 00:11:49.049 18.215 - 18.314: 96.2520% ( 9) 00:11:49.049 18.314 - 18.412: 96.4115% ( 6) 00:11:49.049 18.412 - 18.511: 96.6507% ( 9) 00:11:49.049 18.511 - 18.609: 96.8368% ( 7) 00:11:49.049 18.609 - 18.708: 97.1823% ( 13) 00:11:49.049 18.708 - 18.806: 97.3418% ( 6) 00:11:49.049 18.806 - 18.905: 97.5279% ( 7) 00:11:49.049 18.905 - 19.003: 97.7140% ( 7) 00:11:49.049 19.003 - 19.102: 97.9532% ( 9) 00:11:49.049 19.102 - 19.200: 98.0330% ( 3) 00:11:49.049 19.200 - 19.298: 98.1659% ( 5) 00:11:49.049 19.298 - 19.397: 98.2722% ( 4) 00:11:49.049 19.397 - 19.495: 98.3254% ( 2) 00:11:49.049 19.495 - 19.594: 98.3785% ( 2) 00:11:49.049 19.692 - 19.791: 98.4051% ( 1) 00:11:49.049 19.791 - 19.889: 98.5114% ( 4) 00:11:49.049 19.889 - 19.988: 98.6178% ( 4) 00:11:49.049 19.988 - 20.086: 98.6975% ( 3) 00:11:49.049 20.086 - 20.185: 98.7241% ( 1) 00:11:49.049 20.185 - 20.283: 98.8038% ( 3) 00:11:49.049 20.283 - 20.382: 98.8570% ( 2) 00:11:49.049 20.578 - 20.677: 98.8836% ( 1) 00:11:49.049 20.677 - 20.775: 98.9367% ( 2) 00:11:49.049 20.874 - 20.972: 98.9633% ( 1) 00:11:49.049 21.071 - 21.169: 98.9899% ( 1) 00:11:49.049 21.169 - 21.268: 99.0431% ( 2) 00:11:49.049 21.268 - 21.366: 99.0696% ( 1) 00:11:49.049 21.465 - 21.563: 99.0962% ( 1) 00:11:49.049 21.760 - 21.858: 99.1228% ( 1) 00:11:49.049 21.957 - 22.055: 99.1760% ( 2) 00:11:49.049 22.252 - 22.351: 99.2026% ( 1) 00:11:49.049 22.351 - 22.449: 99.2291% ( 1) 00:11:49.049 22.449 - 22.548: 99.2557% ( 1) 00:11:49.049 22.548 - 22.646: 99.3089% ( 2) 00:11:49.049 22.843 - 22.942: 99.3620% ( 2) 00:11:49.049 23.138 - 23.237: 99.3886% ( 1) 00:11:49.049 23.237 - 23.335: 99.4152% ( 1) 00:11:49.049 23.335 - 23.434: 99.4418% ( 1) 00:11:49.049 23.532 - 23.631: 99.4684% ( 1) 00:11:49.049 23.631 - 23.729: 99.5215% ( 2) 00:11:49.049 24.320 - 24.418: 99.5481% ( 1) 00:11:49.049 24.615 - 24.714: 99.6013% ( 2) 00:11:49.049 25.009 - 25.108: 99.6279% ( 1) 00:11:49.049 26.388 - 26.585: 99.6544% ( 1) 00:11:49.049 27.175 - 27.372: 99.6810% ( 1) 00:11:49.049 28.357 - 28.554: 99.7342% ( 2) 00:11:49.049 28.554 - 28.751: 99.7608% ( 1) 00:11:49.049 29.342 - 29.538: 99.7873% ( 1) 00:11:49.049 30.326 - 30.523: 99.8139% ( 1) 00:11:49.049 41.551 - 41.748: 99.8405% ( 1) 00:11:49.049 55.532 - 55.926: 99.8671% ( 1) 00:11:49.049 57.108 - 57.502: 99.8937% ( 1) 00:11:49.049 57.502 - 57.895: 99.9203% ( 1) 00:11:49.049 61.046 - 61.440: 99.9468% ( 1) 00:11:49.049 78.769 - 79.163: 99.9734% ( 1) 00:11:49.049 83.102 - 83.495: 100.0000% 
( 1) 00:11:49.049 00:11:49.049 Complete histogram 00:11:49.049 ================== 00:11:49.049 Range in us Cumulative Count 00:11:49.049 7.385 - 7.434: 0.2658% ( 10) 00:11:49.049 7.434 - 7.483: 1.0367% ( 29) 00:11:49.049 7.483 - 7.532: 4.3328% ( 124) 00:11:49.049 7.532 - 7.582: 9.7554% ( 204) 00:11:49.049 7.582 - 7.631: 18.2616% ( 320) 00:11:49.049 7.631 - 7.680: 27.9373% ( 364) 00:11:49.049 7.680 - 7.729: 36.5497% ( 324) 00:11:49.049 7.729 - 7.778: 43.5673% ( 264) 00:11:49.049 7.778 - 7.828: 48.2988% ( 178) 00:11:49.049 7.828 - 7.877: 51.9405% ( 137) 00:11:49.049 7.877 - 7.926: 54.6784% ( 103) 00:11:49.049 7.926 - 7.975: 56.6986% ( 76) 00:11:49.049 7.975 - 8.025: 58.7985% ( 79) 00:11:49.049 8.025 - 8.074: 61.2972% ( 94) 00:11:49.049 8.074 - 8.123: 63.8224% ( 95) 00:11:49.049 8.123 - 8.172: 66.7730% ( 111) 00:11:49.049 8.172 - 8.222: 70.3615% ( 135) 00:11:49.049 8.222 - 8.271: 73.9500% ( 135) 00:11:49.049 8.271 - 8.320: 76.6879% ( 103) 00:11:49.049 8.320 - 8.369: 79.1334% ( 92) 00:11:49.049 8.369 - 8.418: 81.0739% ( 73) 00:11:49.049 8.418 - 8.468: 83.2004% ( 80) 00:11:49.049 8.468 - 8.517: 85.0346% ( 69) 00:11:49.049 8.517 - 8.566: 86.2307% ( 45) 00:11:49.049 8.566 - 8.615: 87.2142% ( 37) 00:11:49.049 8.615 - 8.665: 88.1978% ( 37) 00:11:49.049 8.665 - 8.714: 89.0218% ( 31) 00:11:49.049 8.714 - 8.763: 89.5268% ( 19) 00:11:49.049 8.763 - 8.812: 89.9522% ( 16) 00:11:49.049 8.812 - 8.862: 90.6699% ( 27) 00:11:49.049 8.862 - 8.911: 91.0420% ( 14) 00:11:49.049 8.911 - 8.960: 91.4407% ( 15) 00:11:49.049 8.960 - 9.009: 91.7863% ( 13) 00:11:49.049 9.009 - 9.058: 91.9989% ( 8) 00:11:49.049 9.058 - 9.108: 92.3711% ( 14) 00:11:49.049 9.108 - 9.157: 92.6635% ( 11) 00:11:49.049 9.157 - 9.206: 92.9559% ( 11) 00:11:49.049 9.206 - 9.255: 93.1951% ( 9) 00:11:49.049 9.255 - 9.305: 93.4078% ( 8) 00:11:49.049 9.305 - 9.354: 93.8331% ( 16) 00:11:49.049 9.354 - 9.403: 94.0457% ( 8) 00:11:49.049 9.403 - 9.452: 94.1520% ( 4) 00:11:49.050 9.452 - 9.502: 94.3115% ( 6) 00:11:49.050 9.502 - 9.551: 94.4976% ( 7) 00:11:49.050 9.551 - 9.600: 94.5508% ( 2) 00:11:49.050 9.600 - 9.649: 94.6305% ( 3) 00:11:49.050 9.649 - 9.698: 94.7368% ( 4) 00:11:49.050 9.698 - 9.748: 94.8963% ( 6) 00:11:49.050 9.748 - 9.797: 94.9495% ( 2) 00:11:49.050 9.797 - 9.846: 94.9761% ( 1) 00:11:49.050 9.846 - 9.895: 95.0558% ( 3) 00:11:49.050 9.895 - 9.945: 95.1621% ( 4) 00:11:49.050 9.945 - 9.994: 95.1887% ( 1) 00:11:49.050 9.994 - 10.043: 95.2419% ( 2) 00:11:49.050 10.043 - 10.092: 95.2685% ( 1) 00:11:49.050 10.092 - 10.142: 95.2951% ( 1) 00:11:49.050 10.142 - 10.191: 95.3482% ( 2) 00:11:49.050 10.191 - 10.240: 95.3748% ( 1) 00:11:49.050 10.289 - 10.338: 95.4280% ( 2) 00:11:49.050 10.338 - 10.388: 95.4545% ( 1) 00:11:49.050 10.388 - 10.437: 95.4811% ( 1) 00:11:49.050 10.437 - 10.486: 95.5343% ( 2) 00:11:49.050 10.585 - 10.634: 95.5609% ( 1) 00:11:49.050 10.634 - 10.683: 95.6140% ( 2) 00:11:49.050 10.732 - 10.782: 95.6406% ( 1) 00:11:49.050 10.831 - 10.880: 95.6672% ( 1) 00:11:49.050 10.880 - 10.929: 95.6938% ( 1) 00:11:49.050 10.929 - 10.978: 95.7735% ( 3) 00:11:49.050 10.978 - 11.028: 95.8267% ( 2) 00:11:49.050 11.028 - 11.077: 95.8533% ( 1) 00:11:49.050 11.077 - 11.126: 95.9064% ( 2) 00:11:49.050 11.126 - 11.175: 96.0128% ( 4) 00:11:49.050 11.175 - 11.225: 96.0659% ( 2) 00:11:49.050 11.225 - 11.274: 96.1988% ( 5) 00:11:49.050 11.274 - 11.323: 96.3317% ( 5) 00:11:49.050 11.323 - 11.372: 96.4381% ( 4) 00:11:49.050 11.372 - 11.422: 96.5178% ( 3) 00:11:49.050 11.422 - 11.471: 96.5976% ( 3) 00:11:49.050 11.471 - 11.520: 96.6507% ( 2) 00:11:49.050 
11.520 - 11.569: 96.7305% ( 3) 00:11:49.050 11.569 - 11.618: 96.9165% ( 7) 00:11:49.050 11.618 - 11.668: 96.9697% ( 2) 00:11:49.050 11.717 - 11.766: 97.1026% ( 5) 00:11:49.050 11.766 - 11.815: 97.2355% ( 5) 00:11:49.050 11.815 - 11.865: 97.2887% ( 2) 00:11:49.050 11.865 - 11.914: 97.3153% ( 1) 00:11:49.050 11.963 - 12.012: 97.3684% ( 2) 00:11:49.050 12.111 - 12.160: 97.3950% ( 1) 00:11:49.050 12.160 - 12.209: 97.4216% ( 1) 00:11:49.050 12.357 - 12.406: 97.4482% ( 1) 00:11:49.050 12.406 - 12.455: 97.4747% ( 1) 00:11:49.050 12.505 - 12.554: 97.5013% ( 1) 00:11:49.050 12.554 - 12.603: 97.5545% ( 2) 00:11:49.050 12.702 - 12.800: 97.6077% ( 2) 00:11:49.050 12.800 - 12.898: 97.6342% ( 1) 00:11:49.050 12.898 - 12.997: 97.6608% ( 1) 00:11:49.050 13.095 - 13.194: 97.6874% ( 1) 00:11:49.050 13.292 - 13.391: 97.7140% ( 1) 00:11:49.050 13.391 - 13.489: 97.7671% ( 2) 00:11:49.050 13.489 - 13.588: 97.8203% ( 2) 00:11:49.050 13.588 - 13.686: 97.9532% ( 5) 00:11:49.050 13.686 - 13.785: 98.0861% ( 5) 00:11:49.050 13.785 - 13.883: 98.1659% ( 3) 00:11:49.050 13.883 - 13.982: 98.2722% ( 4) 00:11:49.050 14.080 - 14.178: 98.3785% ( 4) 00:11:49.050 14.178 - 14.277: 98.4583% ( 3) 00:11:49.050 14.277 - 14.375: 98.4848% ( 1) 00:11:49.050 14.375 - 14.474: 98.5912% ( 4) 00:11:49.050 14.474 - 14.572: 98.6178% ( 1) 00:11:49.050 14.572 - 14.671: 98.6975% ( 3) 00:11:49.050 14.769 - 14.868: 98.7507% ( 2) 00:11:49.050 14.868 - 14.966: 98.8570% ( 4) 00:11:49.050 14.966 - 15.065: 98.8836% ( 1) 00:11:49.050 15.163 - 15.262: 98.9102% ( 1) 00:11:49.050 15.262 - 15.360: 98.9899% ( 3) 00:11:49.050 15.360 - 15.458: 99.0431% ( 2) 00:11:49.050 15.458 - 15.557: 99.0962% ( 2) 00:11:49.050 15.655 - 15.754: 99.1228% ( 1) 00:11:49.050 16.049 - 16.148: 99.1494% ( 1) 00:11:49.050 16.345 - 16.443: 99.2291% ( 3) 00:11:49.050 16.443 - 16.542: 99.2557% ( 1) 00:11:49.050 16.738 - 16.837: 99.2823% ( 1) 00:11:49.050 17.034 - 17.132: 99.3620% ( 3) 00:11:49.050 17.329 - 17.428: 99.3886% ( 1) 00:11:49.050 17.822 - 17.920: 99.4152% ( 1) 00:11:49.050 17.920 - 18.018: 99.4418% ( 1) 00:11:49.050 18.314 - 18.412: 99.4684% ( 1) 00:11:49.050 19.495 - 19.594: 99.4949% ( 1) 00:11:49.050 19.988 - 20.086: 99.5481% ( 2) 00:11:49.050 20.185 - 20.283: 99.5747% ( 1) 00:11:49.050 20.578 - 20.677: 99.6013% ( 1) 00:11:49.050 21.465 - 21.563: 99.6279% ( 1) 00:11:49.050 22.252 - 22.351: 99.6544% ( 1) 00:11:49.050 22.548 - 22.646: 99.7076% ( 2) 00:11:49.050 22.942 - 23.040: 99.7342% ( 1) 00:11:49.050 23.335 - 23.434: 99.7608% ( 1) 00:11:49.050 23.434 - 23.532: 99.7873% ( 1) 00:11:49.050 25.797 - 25.994: 99.8405% ( 2) 00:11:49.050 29.735 - 29.932: 99.8671% ( 1) 00:11:49.050 32.886 - 33.083: 99.8937% ( 1) 00:11:49.050 39.188 - 39.385: 99.9203% ( 1) 00:11:49.050 46.277 - 46.474: 99.9468% ( 1) 00:11:49.050 59.471 - 59.865: 99.9734% ( 1) 00:11:49.050 107.126 - 107.914: 100.0000% ( 1) 00:11:49.050 00:11:49.050 00:11:49.050 real 0m1.216s 00:11:49.050 user 0m1.082s 00:11:49.050 sys 0m0.089s 00:11:49.050 13:31:48 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.050 13:31:48 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 ************************************ 00:11:49.050 END TEST nvme_overhead 00:11:49.050 ************************************ 00:11:49.050 13:31:48 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:49.051 13:31:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:49.051 13:31:48 nvme -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.051 13:31:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:49.051 ************************************ 00:11:49.051 START TEST nvme_arbitration 00:11:49.051 ************************************ 00:11:49.051 13:31:48 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:52.370 Initializing NVMe Controllers 00:11:52.370 Attached to 0000:00:11.0 00:11:52.370 Attached to 0000:00:13.0 00:11:52.370 Attached to 0000:00:10.0 00:11:52.370 Attached to 0000:00:12.0 00:11:52.370 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:11:52.370 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:11:52.370 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:11:52.370 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:52.370 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:52.370 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:52.370 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:52.370 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:52.370 Initialization complete. Launching workers. 00:11:52.370 Starting thread on core 1 with urgent priority queue 00:11:52.370 Starting thread on core 2 with urgent priority queue 00:11:52.370 Starting thread on core 3 with urgent priority queue 00:11:52.370 Starting thread on core 0 with urgent priority queue 00:11:52.370 QEMU NVMe Ctrl (12341 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:11:52.370 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:11:52.370 QEMU NVMe Ctrl (12343 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:11:52.370 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:11:52.370 QEMU NVMe Ctrl (12340 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:11:52.370 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:11:52.370 ======================================================== 00:11:52.370 00:11:52.370 00:11:52.370 real 0m3.321s 00:11:52.370 user 0m9.248s 00:11:52.370 sys 0m0.125s 00:11:52.370 ************************************ 00:11:52.370 END TEST nvme_arbitration 00:11:52.370 ************************************ 00:11:52.370 13:31:51 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.370 13:31:51 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 13:31:51 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:52.370 13:31:51 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.370 13:31:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.370 13:31:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 ************************************ 00:11:52.370 START TEST nvme_single_aen 00:11:52.370 ************************************ 00:11:52.370 13:31:51 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:52.720 Asynchronous Event Request test 00:11:52.720 Attached to 0000:00:11.0 00:11:52.720 Attached to 0000:00:13.0 00:11:52.720 Attached to 0000:00:10.0 00:11:52.720 Attached to 0000:00:12.0 00:11:52.720 Reset controller to setup AER completions for this process 00:11:52.720 Registering asynchronous event callbacks... 
00:11:52.720 Getting orig temperature thresholds of all controllers 00:11:52.720 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:52.720 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:52.720 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:52.720 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:52.720 Setting all controllers temperature threshold low to trigger AER 00:11:52.720 Waiting for all controllers temperature threshold to be set lower 00:11:52.720 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:52.720 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:52.720 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:52.720 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:52.720 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:52.720 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:52.720 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:52.720 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:52.720 Waiting for all controllers to trigger AER and reset threshold 00:11:52.720 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.720 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.720 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.720 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.720 Cleaning up... 00:11:52.720 00:11:52.720 real 0m0.231s 00:11:52.720 user 0m0.080s 00:11:52.720 sys 0m0.102s 00:11:52.720 ************************************ 00:11:52.720 END TEST nvme_single_aen 00:11:52.720 ************************************ 00:11:52.720 13:31:51 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.720 13:31:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:52.720 13:31:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:52.720 13:31:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.720 13:31:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.720 13:31:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:52.720 ************************************ 00:11:52.720 START TEST nvme_doorbell_aers 00:11:52.720 ************************************ 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:52.720 13:31:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
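The xtrace above shows how nvme_doorbell_aers discovers the attached controllers before fanning the doorbell test out over each one. Reconstructed as plain shell from the commands in this log (paths as logged; structure simplified):

    rootdir=/home/vagrant/spdk_repo/spdk
    # collect the PCI addresses (BDFs) of every NVMe controller in the generated config
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1
    for bdf in "${bdfs[@]}"; do
        # 10-second cap per device; --preserve-status propagates the test's own exit code
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

The 'owning process (pid 63464) is not found' messages interleaved below are the driver dropping admin requests whose submitting process has already exited; they are logged at error level but the suite continues past them.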
00:11:52.720 13:31:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:52.720 13:31:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:52.720 13:31:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:52.720 13:31:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:52.982 [2024-11-20 13:31:52.242287] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:03.027 Executing: test_write_invalid_db 00:12:03.027 Waiting for AER completion... 00:12:03.027 Failure: test_write_invalid_db 00:12:03.027 00:12:03.027 Executing: test_invalid_db_write_overflow_sq 00:12:03.027 Waiting for AER completion... 00:12:03.027 Failure: test_invalid_db_write_overflow_sq 00:12:03.027 00:12:03.027 Executing: test_invalid_db_write_overflow_cq 00:12:03.027 Waiting for AER completion... 00:12:03.027 Failure: test_invalid_db_write_overflow_cq 00:12:03.027 00:12:03.027 13:32:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:03.027 13:32:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:03.027 [2024-11-20 13:32:02.239088] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:13.030 Executing: test_write_invalid_db 00:12:13.030 Waiting for AER completion... 00:12:13.030 Failure: test_write_invalid_db 00:12:13.030 00:12:13.030 Executing: test_invalid_db_write_overflow_sq 00:12:13.030 Waiting for AER completion... 00:12:13.030 Failure: test_invalid_db_write_overflow_sq 00:12:13.030 00:12:13.030 Executing: test_invalid_db_write_overflow_cq 00:12:13.030 Waiting for AER completion... 00:12:13.030 Failure: test_invalid_db_write_overflow_cq 00:12:13.030 00:12:13.030 13:32:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:13.030 13:32:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:13.030 [2024-11-20 13:32:12.285518] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:22.994 Executing: test_write_invalid_db 00:12:22.994 Waiting for AER completion... 00:12:22.994 Failure: test_write_invalid_db 00:12:22.994 00:12:22.994 Executing: test_invalid_db_write_overflow_sq 00:12:22.994 Waiting for AER completion... 00:12:22.994 Failure: test_invalid_db_write_overflow_sq 00:12:22.994 00:12:22.994 Executing: test_invalid_db_write_overflow_cq 00:12:22.994 Waiting for AER completion... 
00:12:22.994 Failure: test_invalid_db_write_overflow_cq 00:12:22.994 00:12:22.994 13:32:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:22.994 13:32:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:22.994 [2024-11-20 13:32:22.323168] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 Executing: test_write_invalid_db 00:12:32.997 Waiting for AER completion... 00:12:32.997 Failure: test_write_invalid_db 00:12:32.997 00:12:32.997 Executing: test_invalid_db_write_overflow_sq 00:12:32.997 Waiting for AER completion... 00:12:32.997 Failure: test_invalid_db_write_overflow_sq 00:12:32.997 00:12:32.997 Executing: test_invalid_db_write_overflow_cq 00:12:32.997 Waiting for AER completion... 00:12:32.997 Failure: test_invalid_db_write_overflow_cq 00:12:32.997 00:12:32.997 ************************************ 00:12:32.997 END TEST nvme_doorbell_aers 00:12:32.997 ************************************ 00:12:32.997 00:12:32.997 real 0m40.186s 00:12:32.997 user 0m34.321s 00:12:32.997 sys 0m5.497s 00:12:32.997 13:32:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.997 13:32:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:32.997 13:32:32 nvme -- nvme/nvme.sh@97 -- # uname 00:12:32.997 13:32:32 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:32.997 13:32:32 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:32.997 13:32:32 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:32.997 13:32:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.997 13:32:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.997 ************************************ 00:12:32.997 START TEST nvme_multi_aen 00:12:32.997 ************************************ 00:12:32.997 13:32:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:32.997 [2024-11-20 13:32:32.382429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.382507] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.382519] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.385082] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.385132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.385144] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.387114] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. 
Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.387146] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.387155] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.388322] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.388355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 [2024-11-20 13:32:32.388364] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63464) is not found. Dropping the request. 00:12:32.997 Child process pid: 63985 00:12:33.254 [Child] Asynchronous Event Request test 00:12:33.254 [Child] Attached to 0000:00:11.0 00:12:33.254 [Child] Attached to 0000:00:13.0 00:12:33.254 [Child] Attached to 0000:00:10.0 00:12:33.254 [Child] Attached to 0000:00:12.0 00:12:33.254 [Child] Registering asynchronous event callbacks... 00:12:33.254 [Child] Getting orig temperature thresholds of all controllers 00:12:33.255 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:33.255 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 [Child] Cleaning up... 00:12:33.255 Asynchronous Event Request test 00:12:33.255 Attached to 0000:00:11.0 00:12:33.255 Attached to 0000:00:13.0 00:12:33.255 Attached to 0000:00:10.0 00:12:33.255 Attached to 0000:00:12.0 00:12:33.255 Reset controller to setup AER completions for this process 00:12:33.255 Registering asynchronous event callbacks... 
00:12:33.255 Getting orig temperature thresholds of all controllers 00:12:33.255 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:33.255 Setting all controllers temperature threshold low to trigger AER 00:12:33.255 Waiting for all controllers temperature threshold to be set lower 00:12:33.255 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:33.255 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:33.255 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:33.255 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:33.255 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:33.255 Waiting for all controllers to trigger AER and reset threshold 00:12:33.255 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:33.255 Cleaning up... 00:12:33.255 ************************************ 00:12:33.255 END TEST nvme_multi_aen 00:12:33.255 ************************************ 00:12:33.255 00:12:33.255 real 0m0.445s 00:12:33.255 user 0m0.146s 00:12:33.255 sys 0m0.184s 00:12:33.255 13:32:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.255 13:32:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:33.255 13:32:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:33.255 13:32:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.255 13:32:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.255 13:32:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.255 ************************************ 00:12:33.255 START TEST nvme_startup 00:12:33.255 ************************************ 00:12:33.255 13:32:32 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:33.513 Initializing NVMe Controllers 00:12:33.513 Attached to 0000:00:11.0 00:12:33.513 Attached to 0000:00:13.0 00:12:33.513 Attached to 0000:00:10.0 00:12:33.513 Attached to 0000:00:12.0 00:12:33.513 Initialization complete. 00:12:33.513 Time used:150195.297 (us). 
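nvme_startup simply measures controller bring-up: the Time used:150195.297 (us) above is roughly 0.15 s to attach and initialize all four controllers, which accounts for most of the test's real 0m0.218s reported just below.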
00:12:33.513 ************************************ 00:12:33.513 END TEST nvme_startup 00:12:33.513 ************************************ 00:12:33.513 00:12:33.513 real 0m0.218s 00:12:33.513 user 0m0.068s 00:12:33.513 sys 0m0.108s 00:12:33.513 13:32:32 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.513 13:32:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:33.513 13:32:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:33.513 13:32:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:33.513 13:32:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.514 13:32:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.514 ************************************ 00:12:33.514 START TEST nvme_multi_secondary 00:12:33.514 ************************************ 00:12:33.514 13:32:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:33.514 13:32:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64035 00:12:33.514 13:32:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64036 00:12:33.514 13:32:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:33.514 13:32:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:33.514 13:32:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:36.819 Initializing NVMe Controllers 00:12:36.819 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:36.819 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:36.819 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:36.819 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:36.819 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:36.819 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:36.819 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:36.819 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:36.819 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:36.819 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:36.819 Initialization complete. Launching workers. 
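nvme_multi_secondary drives the same controllers from three spdk_nvme_perf processes at once: one primary plus two secondaries that join its shared-memory group via -i 0, each pinned to its own core mask. A sketch of the launch pattern implied by the pid0/pid1 assignments and wait calls in this log (ordering simplified; the harness's nvme.sh differs in detail):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1     # primary, core 0
    wait "$pid0" "$pid1"

A sanity check on the tables that follow: at queue depth 16, Little's law says IOPS x average latency should be about 16 outstanding I/Os per namespace row, and it is (3720.72 IOPS x 4299.87 us = 16.0); likewise MiB/s is just IOPS x 4 KiB (3720.72 x 4096 B = 14.53 MiB/s).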
00:12:36.819 ======================================================== 00:12:36.819 Latency(us) 00:12:36.819 Device Information : IOPS MiB/s Average min max 00:12:36.819 PCIE (0000:00:11.0) NSID 1 from core 2: 3720.72 14.53 4299.87 764.37 13441.07 00:12:36.819 PCIE (0000:00:13.0) NSID 1 from core 2: 3715.39 14.51 4306.62 762.56 12243.95 00:12:36.819 PCIE (0000:00:10.0) NSID 1 from core 2: 3726.05 14.55 4299.97 750.17 11810.66 00:12:36.819 PCIE (0000:00:12.0) NSID 1 from core 2: 3758.03 14.68 4264.84 650.49 12422.78 00:12:36.819 PCIE (0000:00:12.0) NSID 2 from core 2: 3736.71 14.60 4288.44 769.23 12754.41 00:12:36.819 PCIE (0000:00:12.0) NSID 3 from core 2: 3736.71 14.60 4288.74 714.28 13320.00 00:12:36.819 ======================================================== 00:12:36.819 Total : 22393.62 87.48 4291.36 650.49 13441.07 00:12:36.819 00:12:36.819 13:32:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64035 00:12:37.076 Initializing NVMe Controllers 00:12:37.076 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:37.076 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:37.076 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:37.076 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:37.076 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:37.076 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:37.076 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:37.076 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:37.076 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:37.076 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:37.076 Initialization complete. Launching workers. 00:12:37.076 ======================================================== 00:12:37.076 Latency(us) 00:12:37.076 Device Information : IOPS MiB/s Average min max 00:12:37.076 PCIE (0000:00:11.0) NSID 1 from core 1: 8063.00 31.50 1984.34 598.11 10250.93 00:12:37.076 PCIE (0000:00:13.0) NSID 1 from core 1: 8258.00 32.26 1938.50 501.25 9939.77 00:12:37.076 PCIE (0000:00:10.0) NSID 1 from core 1: 7946.33 31.04 2012.67 622.64 10758.22 00:12:37.076 PCIE (0000:00:12.0) NSID 1 from core 1: 8205.67 32.05 1949.61 493.81 12276.88 00:12:37.076 PCIE (0000:00:12.0) NSID 2 from core 1: 8093.00 31.61 1978.98 474.33 10079.66 00:12:37.076 PCIE (0000:00:12.0) NSID 3 from core 1: 8148.33 31.83 1963.17 436.69 9702.00 00:12:37.076 ======================================================== 00:12:37.076 Total : 48714.33 190.29 1970.91 436.69 12276.88 00:12:37.076 00:12:38.975 Initializing NVMe Controllers 00:12:38.975 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:38.975 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:38.975 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:38.975 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:38.975 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:38.975 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:38.975 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:38.975 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:38.975 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:38.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:38.975 Initialization complete. Launching workers. 
00:12:38.975 ========================================================
00:12:38.975 Latency(us)
00:12:38.975 Device Information : IOPS MiB/s Average min max
00:12:38.975 PCIE (0000:00:11.0) NSID 1 from core 0: 10901.13 42.58 1467.37 209.97 10110.41
00:12:38.975 PCIE (0000:00:13.0) NSID 1 from core 0: 10921.73 42.66 1464.57 258.40 10590.25
00:12:38.975 PCIE (0000:00:10.0) NSID 1 from core 0: 10672.95 41.69 1497.84 248.67 11270.90
00:12:38.975 PCIE (0000:00:12.0) NSID 1 from core 0: 10831.94 42.31 1476.66 192.78 10774.64
00:12:38.975 PCIE (0000:00:12.0) NSID 2 from core 0: 10834.34 42.32 1476.30 239.12 11495.82
00:12:38.975 PCIE (0000:00:12.0) NSID 3 from core 0: 10882.33 42.51 1469.77 259.36 10867.86
00:12:38.975 ========================================================
00:12:38.975 Total : 65044.41 254.08 1475.34 192.78 11495.82
00:12:38.975
00:12:38.975 13:32:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64036
00:12:38.975 13:32:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64111
00:12:38.975 13:32:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64112
00:12:38.975 13:32:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:12:38.975 13:32:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:12:38.975 13:32:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:12:42.397 Initializing NVMe Controllers
00:12:42.397 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:42.397 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:42.397 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:42.397 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:42.397 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:12:42.397 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:12:42.397 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:12:42.397 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:12:42.397 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:12:42.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:12:42.397 Initialization complete. Launching workers.
00:12:42.397 ========================================================
00:12:42.397 Latency(us)
00:12:42.397 Device Information : IOPS MiB/s Average min max
00:12:42.397 PCIE (0000:00:11.0) NSID 1 from core 0: 7055.55 27.56 2267.22 715.47 8761.29
00:12:42.397 PCIE (0000:00:13.0) NSID 1 from core 0: 7055.55 27.56 2267.41 711.66 8257.79
00:12:42.397 PCIE (0000:00:10.0) NSID 1 from core 0: 7055.55 27.56 2266.35 683.12 10141.21
00:12:42.397 PCIE (0000:00:12.0) NSID 1 from core 0: 7055.55 27.56 2267.32 715.70 8804.62
00:12:42.397 PCIE (0000:00:12.0) NSID 2 from core 0: 7055.55 27.56 2267.28 707.09 9668.26
00:12:42.397 PCIE (0000:00:12.0) NSID 3 from core 0: 7055.55 27.56 2267.19 709.01 9137.13
00:12:42.397 ========================================================
00:12:42.397 Total : 42333.32 165.36 2267.13 683.12 10141.21
00:12:42.397
00:12:42.397 Initializing NVMe Controllers
00:12:42.397 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:42.397 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:42.397 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:42.397 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:42.397 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:12:42.397 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:12:42.397 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:12:42.397 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:12:42.397 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:12:42.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:12:42.397 Initialization complete. Launching workers.
00:12:42.397 ========================================================
00:12:42.397 Latency(us)
00:12:42.397 Device Information : IOPS MiB/s Average min max
00:12:42.397 PCIE (0000:00:11.0) NSID 1 from core 1: 6970.56 27.23 2294.92 755.56 7887.13
00:12:42.397 PCIE (0000:00:13.0) NSID 1 from core 1: 6970.56 27.23 2294.85 729.26 8093.05
00:12:42.397 PCIE (0000:00:10.0) NSID 1 from core 1: 6970.56 27.23 2293.75 707.44 8126.16
00:12:42.397 PCIE (0000:00:12.0) NSID 1 from core 1: 6970.56 27.23 2294.71 754.06 8165.16
00:12:42.397 PCIE (0000:00:12.0) NSID 2 from core 1: 6970.56 27.23 2294.62 646.70 8903.42
00:12:42.397 PCIE (0000:00:12.0) NSID 3 from core 1: 6970.56 27.23 2294.58 603.72 8646.62
00:12:42.397 ========================================================
00:12:42.397 Total : 41823.36 163.37 2294.57 603.72 8903.42
00:12:42.397
00:12:44.942 Initializing NVMe Controllers
00:12:44.942 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:44.942 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:44.942 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:44.942 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:44.942 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:12:44.942 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:12:44.942 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:12:44.942 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:12:44.942 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:12:44.942 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:12:44.942 Initialization complete. Launching workers.
00:12:44.942 ========================================================
00:12:44.942 Latency(us)
00:12:44.942 Device Information : IOPS MiB/s Average min max
00:12:44.942 PCIE (0000:00:11.0) NSID 1 from core 2: 3615.65 14.12 4424.57 774.58 21608.01
00:12:44.942 PCIE (0000:00:13.0) NSID 1 from core 2: 3615.65 14.12 4424.71 764.20 22026.78
00:12:44.942 PCIE (0000:00:10.0) NSID 1 from core 2: 3615.65 14.12 4422.61 748.69 22163.31
00:12:44.942 PCIE (0000:00:12.0) NSID 1 from core 2: 3615.65 14.12 4423.86 709.06 20429.33
00:12:44.942 PCIE (0000:00:12.0) NSID 2 from core 2: 3615.65 14.12 4424.21 766.91 18575.40
00:12:44.942 PCIE (0000:00:12.0) NSID 3 from core 2: 3615.65 14.12 4424.36 594.49 20271.39
00:12:44.942 ========================================================
00:12:44.942 Total : 21693.90 84.74 4424.05 594.49 22163.31
00:12:44.942
00:12:44.942 ************************************
00:12:44.942 END TEST nvme_multi_secondary
00:12:44.942 ************************************
00:12:44.942 13:32:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64111
00:12:44.942 13:32:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64112
00:12:44.942
00:12:44.942 real 0m10.929s
00:12:44.942 user 0m18.339s
00:12:44.942 sys 0m0.672s
00:12:44.942 13:32:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:44.942 13:32:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:12:44.942 13:32:43 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:12:44.942 13:32:43 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:12:44.942 13:32:43 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63067 ]]
00:12:44.942 13:32:43 nvme -- common/autotest_common.sh@1094 -- # kill 63067
00:12:44.942 13:32:43 nvme -- common/autotest_common.sh@1095 -- # wait 63067
00:12:44.942 [2024-11-20 13:32:43.897507] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.897557] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.897575] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.897585] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.900682] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.900772] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.900804] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.900854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.904887] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request.
00:12:44.942 [2024-11-20 13:32:43.904992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.942 [2024-11-20 13:32:43.905025] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.942 [2024-11-20 13:32:43.905065] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.942 [2024-11-20 13:32:43.908873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.943 [2024-11-20 13:32:43.908908] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.943 [2024-11-20 13:32:43.908917] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.943 [2024-11-20 13:32:43.908927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63984) is not found. Dropping the request. 00:12:44.943 13:32:44 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:44.943 13:32:44 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:44.943 13:32:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:44.943 13:32:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.943 13:32:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.943 13:32:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.943 ************************************ 00:12:44.943 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:44.943 ************************************ 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:44.943 * Looking for test storage... 
00:12:44.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.943 --rc genhtml_branch_coverage=1 00:12:44.943 --rc genhtml_function_coverage=1 00:12:44.943 --rc genhtml_legend=1 00:12:44.943 --rc geninfo_all_blocks=1 00:12:44.943 --rc geninfo_unexecuted_blocks=1 00:12:44.943 00:12:44.943 ' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.943 --rc genhtml_branch_coverage=1 00:12:44.943 --rc genhtml_function_coverage=1 00:12:44.943 --rc genhtml_legend=1 00:12:44.943 --rc geninfo_all_blocks=1 00:12:44.943 --rc geninfo_unexecuted_blocks=1 00:12:44.943 00:12:44.943 ' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.943 --rc genhtml_branch_coverage=1 00:12:44.943 --rc genhtml_function_coverage=1 00:12:44.943 --rc genhtml_legend=1 00:12:44.943 --rc geninfo_all_blocks=1 00:12:44.943 --rc geninfo_unexecuted_blocks=1 00:12:44.943 00:12:44.943 ' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.943 --rc genhtml_branch_coverage=1 00:12:44.943 --rc genhtml_function_coverage=1 00:12:44.943 --rc genhtml_legend=1 00:12:44.943 --rc geninfo_all_blocks=1 00:12:44.943 --rc geninfo_unexecuted_blocks=1 00:12:44.943 00:12:44.943 ' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:44.943 
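The scripts/common.sh trace above is the harness deciding which lcov flags to export: lt 1.15 2 asks whether the installed lcov (1.15) predates 2.x, and cmp_versions answers by splitting both version strings on '.', '-' and ':' and comparing them field by field, treating missing fields as zero. A condensed, standalone sketch of that comparison (the logic mirrors the trace above, but this is an illustrative reconstruction, not the verbatim scripts/common.sh source):

    # Returns 0 (true) when version $1 sorts before version $2, as in "lt 1.15 2".
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # A missing field compares as 0, so "2" behaves like "2.0".
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x: use the 1.x coverage flags'

Since 1 < 2 already in the first field, the gate passes, which is why the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options seen in the trace get exported before the test variables (ctrlr_name, timeouts) are set up.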
13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:44.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64273 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64273 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64273 ']' 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.943 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
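The BDF just selected above (0000:00:10.0) came straight from gen_nvme.sh piped through jq -r '.config[].params.traddr'. Once the spdk_tgt launched below is listening, the test attaches that controller and arms a one-shot error injection, then proves that a controller reset flushes the stuck admin command. A condensed sketch of that RPC sequence (every RPC name and flag appears verbatim in the trace that follows; the output path and the byte-offset comments are editor annotations, since the real script writes to a mktemp'd file such as /tmp/err_inj_P5gly.txt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the controller picked above, then arm a one-shot failure: admin
    # opcode 0x0a (Get Features) is held for up to 15 s (--do_not_submit) and
    # eventually completed with SCT=0, SC=1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Issue the doomed Get Features (cdw10=7, "number of queues") in the
    # background, then reset the controller to force the stuck command out.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > /tmp/err_inj.json &   # illustrative path; the script uses mktemp
    sleep 2                     # the trace sleeps here so the command is in flight
    $rpc bdev_nvme_reset_controller nvme0
    wait

    # The RPC reply carries the completion base64-encoded in .cpl; bytes 14-15
    # are the status word (bit 0 phase tag, bits 1-8 SC, bits 9-11 SCT).
    bytes=($(jq -r .cpl /tmp/err_inj.json | base64 -d | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[14] | (bytes[15] << 8) ))
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))

Decoded this way, the status word 0x0002 seen later in the trace yields SC=0x1 and SCT=0x0, exactly the values the injection requested, which is what the test asserts before tearing down pid 64273.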
00:12:44.944 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.944 13:32:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:44.944 [2024-11-20 13:32:44.343641] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:44.944 [2024-11-20 13:32:44.343768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64273 ] 00:12:45.203 [2024-11-20 13:32:44.514762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.203 [2024-11-20 13:32:44.623543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.203 [2024-11-20 13:32:44.623756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.203 [2024-11-20 13:32:44.624214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.203 [2024-11-20 13:32:44.624226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:46.143 nvme0n1 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_P5gly.txt 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:46.143 true 00:12:46.143 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.144 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:46.144 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732109565 00:12:46.144 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64296 00:12:46.144 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:46.144 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:46.144 13:32:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:48.085 [2024-11-20 13:32:47.315502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:48.085 [2024-11-20 13:32:47.315827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:48.085 [2024-11-20 13:32:47.315857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:48.085 [2024-11-20 13:32:47.315872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.085 [2024-11-20 13:32:47.318214] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:48.085 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64296 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64296 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64296 00:12:48.085 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_P5gly.txt 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_P5gly.txt 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64273 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64273 ']' 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64273 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64273 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.086 killing process with pid 64273 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64273' 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64273 00:12:48.086 13:32:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64273 00:12:50.000 13:32:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:50.000 13:32:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:50.000 ************************************ 00:12:50.000 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:50.000 ************************************ 00:12:50.000 00:12:50.000 real 0m5.134s 
00:12:50.000 user 0m18.070s 00:12:50.000 sys 0m0.526s 00:12:50.000 13:32:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.000 13:32:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:50.000 13:32:49 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:50.000 13:32:49 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:50.000 13:32:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.000 13:32:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.000 13:32:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.000 ************************************ 00:12:50.000 START TEST nvme_fio 00:12:50.000 ************************************ 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:50.000 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:50.000 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:50.260 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:50.260 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:50.521 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:50.521 13:32:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:50.521 13:32:49 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:50.521 13:32:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:50.803 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:50.803 fio-3.35 00:12:50.803 Starting 1 thread 00:12:56.096 00:12:56.096 test: (groupid=0, jobs=1): err= 0: pid=64440: Wed Nov 20 13:32:54 2024 00:12:56.096 read: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(106MiB/2001msec) 00:12:56.096 slat (usec): min=4, max=587, avg= 7.04, stdev= 5.09 00:12:56.096 clat (usec): min=809, max=126515, avg=4372.42, stdev=5067.24 00:12:56.096 lat (usec): min=827, max=126520, avg=4379.46, stdev=5067.41 00:12:56.096 clat percentiles (msec): 00:12:56.096 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4], 00:12:56.096 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 5], 00:12:56.096 | 70.00th=[ 5], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 7], 00:12:56.096 | 99.00th=[ 9], 99.50th=[ 28], 99.90th=[ 125], 99.95th=[ 126], 00:12:56.096 | 99.99th=[ 126] 00:12:56.096 bw ( KiB/s): min=39624, max=57408, per=94.25%, avg=50904.00, stdev=9806.90, samples=3 00:12:56.096 iops : min= 9906, max=14352, avg=12726.00, stdev=2451.73, samples=3 00:12:56.096 write: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(105MiB/2001msec); 0 zone resets 00:12:56.096 slat (usec): min=5, max=129, avg= 7.36, stdev= 3.49 00:12:56.096 clat (usec): min=833, max=135555, avg=5078.39, stdev=8825.87 00:12:56.096 lat (usec): min=840, max=135560, avg=5085.76, stdev=8825.87 00:12:56.096 clat percentiles (msec): 00:12:56.096 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4], 00:12:56.096 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 5], 00:12:56.096 | 70.00th=[ 5], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 7], 00:12:56.096 | 99.00th=[ 41], 99.50th=[ 63], 99.90th=[ 134], 99.95th=[ 136], 00:12:56.096 | 99.99th=[ 136] 00:12:56.096 bw ( KiB/s): min=39008, max=57744, per=94.42%, avg=50949.33, stdev=10374.27, samples=3 00:12:56.096 iops : min= 9752, max=14436, avg=12737.33, stdev=2593.57, samples=3 00:12:56.096 lat (usec) : 1000=0.03% 00:12:56.096 lat (msec) : 2=0.60%, 4=57.74%, 10=40.35%, 20=0.10%, 50=0.67% 00:12:56.096 lat (msec) : 100=0.28%, 250=0.24% 00:12:56.096 cpu : usr=98.65%, sys=0.00%, ctx=4, majf=0, minf=607 00:12:56.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:56.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.096 issued rwts: total=27019,26993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.096 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.096 00:12:56.096 Run status group 0 (all jobs): 00:12:56.096 READ: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=106MiB (111MB), run=2001-2001msec 00:12:56.096 WRITE: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=105MiB (111MB), run=2001-2001msec 00:12:56.096 ----------------------------------------------------- 00:12:56.096 Suppressions used: 00:12:56.096 count bytes template 00:12:56.096 1 32 /usr/src/fio/parse.c 00:12:56.096 1 8 libtcmalloc_minimal.so 00:12:56.096 ----------------------------------------------------- 00:12:56.096 00:12:56.096 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:56.096 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:56.096 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:56.096 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:56.096 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:56.096 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:56.357 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:56.357 13:32:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:56.357 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:56.357 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:56.357 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:56.358 13:32:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:56.358 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:56.358 fio-3.35 00:12:56.358 Starting 1 thread 00:13:11.296 00:13:11.296 test: (groupid=0, jobs=1): err= 0: pid=64498: Wed Nov 20 13:33:09 2024 00:13:11.296 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec) 00:13:11.296 slat (usec): min=4, max=102, avg= 6.24, stdev= 3.11 00:13:11.296 clat (usec): min=316, max=12636, avg=3646.31, stdev=1080.40 00:13:11.296 lat (usec): min=322, max=12641, avg=3652.55, stdev=1081.63 00:13:11.296 clat percentiles (usec): 00:13:11.296 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2769], 00:13:11.296 | 30.00th=[ 2933], 40.00th=[ 3163], 50.00th=[ 3359], 60.00th=[ 3621], 00:13:11.296 | 70.00th=[ 3949], 80.00th=[ 4359], 90.00th=[ 5211], 95.00th=[ 5800], 00:13:11.296 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8586], 99.95th=[ 9765], 00:13:11.296 | 99.99th=[11600] 00:13:11.296 bw ( KiB/s): min=65112, max=72480, per=99.64%, avg=69357.33, stdev=3810.14, samples=3 00:13:11.296 iops : min=16278, max=18120, avg=17339.33, stdev=952.53, samples=3 00:13:11.296 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(136MiB/2001msec); 0 zone resets 00:13:11.296 slat (usec): min=4, max=671, avg= 6.46, stdev= 4.73 00:13:11.296 clat (usec): min=335, max=8672, avg=3673.83, stdev=1061.17 00:13:11.296 lat (usec): min=342, max=8677, avg=3680.29, stdev=1062.42 00:13:11.296 clat percentiles (usec): 00:13:11.296 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2802], 00:13:11.296 | 30.00th=[ 2966], 40.00th=[ 3195], 50.00th=[ 3392], 60.00th=[ 3654], 00:13:11.296 | 70.00th=[ 3982], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5800], 00:13:11.296 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8029], 99.95th=[ 8160], 00:13:11.296 | 99.99th=[ 8455] 00:13:11.296 bw ( KiB/s): min=65568, max=72248, per=99.40%, avg=69277.33, stdev=3400.71, samples=3 00:13:11.296 iops : min=16392, max=18062, avg=17319.33, stdev=850.18, samples=3 00:13:11.296 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:11.296 lat (msec) : 2=0.41%, 4=70.85%, 10=28.68%, 20=0.02% 00:13:11.296 cpu : usr=98.20%, sys=0.25%, ctx=4, majf=0, minf=608 00:13:11.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:11.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:11.296 issued rwts: total=34822,34864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:11.296 00:13:11.296 Run status group 0 (all jobs): 00:13:11.296 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:13:11.296 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=136MiB (143MB), run=2001-2001msec 00:13:11.296 ----------------------------------------------------- 00:13:11.296 Suppressions used: 00:13:11.296 count bytes template 00:13:11.296 1 32 /usr/src/fio/parse.c 00:13:11.296 1 8 libtcmalloc_minimal.so 00:13:11.296 ----------------------------------------------------- 00:13:11.296 00:13:11.296 13:33:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:11.296 13:33:09 
nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:11.296 13:33:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:11.296 13:33:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:11.296 13:33:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:11.296 13:33:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:11.296 13:33:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:11.296 13:33:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:11.296 13:33:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:11.296 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:11.296 fio-3.35 00:13:11.296 Starting 1 thread 00:13:17.919 00:13:17.919 test: (groupid=0, jobs=1): err= 0: pid=64555: Wed Nov 20 13:33:16 2024 00:13:17.919 read: IOPS=16.1k, BW=62.9MiB/s (65.9MB/s)(126MiB/2001msec) 00:13:17.919 slat (nsec): min=4226, max=71356, avg=5812.90, stdev=3041.91 00:13:17.919 clat (usec): min=229, max=172209, avg=3720.97, stdev=5180.69 00:13:17.919 lat (usec): min=233, max=172214, avg=3726.78, stdev=5180.93 00:13:17.919 clat percentiles (usec): 00:13:17.919 | 1.00th=[ 1827], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:13:17.919 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3097], 60.00th=[ 3326], 00:13:17.919 | 70.00th=[ 
3720], 80.00th=[ 4490], 90.00th=[ 5342], 95.00th=[ 6128], 00:13:17.919 | 99.00th=[ 7439], 99.50th=[ 8029], 99.90th=[ 47973], 99.95th=[168821], 00:13:17.919 | 99.99th=[168821] 00:13:17.919 bw ( KiB/s): min=42816, max=71488, per=95.52%, avg=61485.33, stdev=16181.91, samples=3 00:13:17.919 iops : min=10704, max=17872, avg=15371.33, stdev=4045.48, samples=3 00:13:17.919 write: IOPS=16.1k, BW=63.0MiB/s (66.0MB/s)(126MiB/2001msec); 0 zone resets 00:13:17.919 slat (usec): min=4, max=103, avg= 6.01, stdev= 3.15 00:13:17.919 clat (usec): min=237, max=181099, avg=4200.81, stdev=9890.46 00:13:17.919 lat (usec): min=242, max=181104, avg=4206.82, stdev=9890.53 00:13:17.919 clat percentiles (usec): 00:13:17.919 | 1.00th=[ 1876], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:13:17.919 | 30.00th=[ 2802], 40.00th=[ 2966], 50.00th=[ 3130], 60.00th=[ 3326], 00:13:17.919 | 70.00th=[ 3720], 80.00th=[ 4555], 90.00th=[ 5407], 95.00th=[ 6128], 00:13:17.919 | 99.00th=[ 7570], 99.50th=[ 49021], 99.90th=[175113], 99.95th=[179307], 00:13:17.919 | 99.99th=[181404] 00:13:17.919 bw ( KiB/s): min=42248, max=71488, per=94.62%, avg=61032.00, stdev=16302.18, samples=3 00:13:17.919 iops : min=10562, max=17872, avg=15258.00, stdev=4075.55, samples=3 00:13:17.919 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:13:17.919 lat (msec) : 2=1.37%, 4=72.25%, 10=25.89%, 20=0.02%, 50=0.11% 00:13:17.919 lat (msec) : 100=0.09%, 250=0.20% 00:13:17.919 cpu : usr=98.65%, sys=0.20%, ctx=4, majf=0, minf=607 00:13:17.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:17.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.919 issued rwts: total=32200,32266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:17.919 00:13:17.919 Run status group 0 (all jobs): 00:13:17.919 READ: bw=62.9MiB/s (65.9MB/s), 62.9MiB/s-62.9MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 00:13:17.919 WRITE: bw=63.0MiB/s (66.0MB/s), 63.0MiB/s-63.0MiB/s (66.0MB/s-66.0MB/s), io=126MiB (132MB), run=2001-2001msec 00:13:17.919 ----------------------------------------------------- 00:13:17.919 Suppressions used: 00:13:17.919 count bytes template 00:13:17.919 1 32 /usr/src/fio/parse.c 00:13:17.919 1 8 libtcmalloc_minimal.so 00:13:17.919 ----------------------------------------------------- 00:13:17.919 00:13:17.919 13:33:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:17.919 13:33:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:17.919 13:33:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:17.919 13:33:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:17.919 13:33:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:17.919 13:33:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:18.180 13:33:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:18.180 13:33:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:18.180 13:33:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:18.180 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:18.180 fio-3.35 00:13:18.180 Starting 1 thread 00:13:33.154 00:13:33.154 test: (groupid=0, jobs=1): err= 0: pid=64621: Wed Nov 20 13:33:31 2024 00:13:33.154 read: IOPS=19.2k, BW=74.9MiB/s (78.5MB/s)(150MiB/2001msec) 00:13:33.154 slat (nsec): min=4243, max=73477, avg=5577.06, stdev=2749.38 00:13:33.154 clat (usec): min=342, max=9562, avg=3315.90, stdev=997.36 00:13:33.154 lat (usec): min=347, max=9566, avg=3321.48, stdev=998.53 00:13:33.154 clat percentiles (usec): 00:13:33.154 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:13:33.154 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3163], 00:13:33.154 | 70.00th=[ 3458], 80.00th=[ 3949], 90.00th=[ 4752], 95.00th=[ 5407], 00:13:33.154 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7767], 99.95th=[ 7898], 00:13:33.154 | 99.99th=[ 9110] 00:13:33.154 bw ( KiB/s): min=72960, max=79248, per=99.49%, avg=76312.00, stdev=3164.57, samples=3 00:13:33.154 iops : min=18240, max=19812, avg=19078.00, stdev=791.14, samples=3 00:13:33.154 write: IOPS=19.2k, BW=74.8MiB/s (78.5MB/s)(150MiB/2001msec); 0 zone resets 00:13:33.154 slat (nsec): min=4309, max=58632, avg=5752.35, stdev=2742.52 00:13:33.154 clat (usec): min=372, max=8955, avg=3335.47, stdev=998.84 00:13:33.154 lat (usec): min=377, max=8972, avg=3341.23, stdev=999.96 00:13:33.154 clat percentiles (usec): 00:13:33.154 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:13:33.154 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3195], 00:13:33.154 | 70.00th=[ 3458], 80.00th=[ 3949], 90.00th=[ 4752], 95.00th=[ 5473], 00:13:33.154 
| 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7701], 99.95th=[ 7832], 00:13:33.154 | 99.99th=[ 8094] 00:13:33.154 bw ( KiB/s): min=73000, max=79360, per=99.80%, avg=76474.67, stdev=3220.70, samples=3 00:13:33.154 iops : min=18250, max=19840, avg=19118.67, stdev=805.17, samples=3 00:13:33.154 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:33.154 lat (msec) : 2=0.71%, 4=80.06%, 10=19.20% 00:13:33.154 cpu : usr=98.90%, sys=0.00%, ctx=4, majf=0, minf=605 00:13:33.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:33.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.154 issued rwts: total=38370,38332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.154 00:13:33.154 Run status group 0 (all jobs): 00:13:33.154 READ: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:13:33.154 WRITE: bw=74.8MiB/s (78.5MB/s), 74.8MiB/s-74.8MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:13:33.154 ----------------------------------------------------- 00:13:33.154 Suppressions used: 00:13:33.154 count bytes template 00:13:33.154 1 32 /usr/src/fio/parse.c 00:13:33.154 1 8 libtcmalloc_minimal.so 00:13:33.154 ----------------------------------------------------- 00:13:33.154 00:13:33.154 13:33:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:33.154 ************************************ 00:13:33.154 END TEST nvme_fio 00:13:33.154 ************************************ 00:13:33.154 13:33:31 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:33.154 00:13:33.154 real 0m42.117s 00:13:33.154 user 0m25.787s 00:13:33.154 sys 0m29.622s 00:13:33.154 13:33:31 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.154 13:33:31 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:33.154 00:13:33.154 real 1m52.408s 00:13:33.154 user 3m49.118s 00:13:33.154 sys 0m40.032s 00:13:33.154 13:33:31 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.155 13:33:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.155 ************************************ 00:13:33.155 END TEST nvme 00:13:33.155 ************************************ 00:13:33.155 13:33:31 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:33.155 13:33:31 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:33.155 13:33:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.155 13:33:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.155 13:33:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.155 ************************************ 00:13:33.155 START TEST nvme_scc 00:13:33.155 ************************************ 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:33.155 * Looking for test storage... 
00:13:33.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.155 --rc genhtml_branch_coverage=1 00:13:33.155 --rc genhtml_function_coverage=1 00:13:33.155 --rc genhtml_legend=1 00:13:33.155 --rc geninfo_all_blocks=1 00:13:33.155 --rc geninfo_unexecuted_blocks=1 00:13:33.155 00:13:33.155 ' 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.155 --rc genhtml_branch_coverage=1 00:13:33.155 --rc genhtml_function_coverage=1 00:13:33.155 --rc genhtml_legend=1 00:13:33.155 --rc geninfo_all_blocks=1 00:13:33.155 --rc geninfo_unexecuted_blocks=1 00:13:33.155 00:13:33.155 ' 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.155 --rc genhtml_branch_coverage=1 00:13:33.155 --rc genhtml_function_coverage=1 00:13:33.155 --rc genhtml_legend=1 00:13:33.155 --rc geninfo_all_blocks=1 00:13:33.155 --rc geninfo_unexecuted_blocks=1 00:13:33.155 00:13:33.155 ' 00:13:33.155 13:33:31 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.155 --rc genhtml_branch_coverage=1 00:13:33.155 --rc genhtml_function_coverage=1 00:13:33.155 --rc genhtml_legend=1 00:13:33.155 --rc geninfo_all_blocks=1 00:13:33.155 --rc geninfo_unexecuted_blocks=1 00:13:33.155 00:13:33.155 ' 00:13:33.155 13:33:31 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.155 13:33:31 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.155 13:33:31 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.155 13:33:31 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.155 13:33:31 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.155 13:33:31 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:33.155 13:33:31 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
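functions.sh, sourced above, is what gives the nvme_scc test its ctrls/nvmes/bdfs associative arrays: scan_nvme_ctrls, which starts just below, walks /sys/class/nvme/nvme*, checks the PCI allowlist, runs nvme id-ctrl, and evals every "reg : val" output line into a per-controller array, producing entries like nvme0[vid]=0x1b36. A minimal standalone sketch of that parse loop (simplified assumptions: no namespace scan and no PCI filtering):

    # Sketch of scan_nvme_ctrls' id-ctrl parsing, reduced to its core loop.
    declare -A ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}
        declare -gA "$ctrl_dev=()"
        # nvme id-ctrl prints "vid : 0x1b36", "mn : ...", one field per line;
        # split on ':' and stash each field in the controller's array.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue
            reg=${reg//[[:space:]]/}           # e.g. "vid"
            val=${val# }                       # drop the leading space
            eval "${ctrl_dev}[$reg]=\"$val\""  # nvme0[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev")
        ctrls[$ctrl_dev]=/dev/$ctrl_dev
    done

Driving everything off id-ctrl's text output keeps the harness independent of sysfs layout; the trace below shows exactly this loop populating nvme0 one register at a time.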
00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:33.155 13:33:31 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:33.155 13:33:31 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:33.155 13:33:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:33.155 13:33:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:33.155 13:33:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:33.155 13:33:31 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:33.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:33.155 Waiting for block devices as requested 00:13:33.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.155 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.155 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.155 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:38.462 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:38.462 13:33:37 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:38.462 13:33:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:38.462 13:33:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:38.462 13:33:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:38.462 13:33:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
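Every register line that follows is produced by the same three-step trace: test the parsed field with [[ -n ... ]], eval it into the controller's global associative array, then reset IFS and read the next "reg : val" pair from the nvme-cli output. A hedged sketch of that loop, assuming nvme-cli's plain-text id-ctrl/id-ns output format and that values never contain double quotes (this is the visible pattern, not the functions.sh helper verbatim):

    nvme_get() {
        # $1 = array name (e.g. nvme0), $2 = nvme-cli subcommand, $3 = device node
        local ref=$1 op=$2 dev=$3 reg val
        local -gA "$ref=()"                   # global associative array, as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # keys arrive space-padded
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"${val# }\""  # e.g. nvme0[vid]="0x1b36"
        done < <(nvme "$op" "$dev")           # assumes an nvme-cli binary on PATH
    }

    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "mdts=${nvme0[mdts]}"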
00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:38.462 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:38.463 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:38.464 13:33:37 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.464 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:38.465 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:38.465 
13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.465 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
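The lbaf0..lbaf7 descriptors captured a few records below ("ms:8 lbads:9 rp:0 ", "ms:0 lbads:12 rp:0 (in use)") pack the metadata size, the log2 of the LBA data size, and a relative-performance hint into one string; per the NVMe spec the logical block size is 2^lbads. A small snippet under that assumed text format to recover the in-use block size:

    desc='ms:0 lbads:12 rp:0 (in use)'       # the flbas-selected format from this scan
    lbads=${desc#*lbads:}; lbads=${lbads%% *}
    echo "$(( 1 << lbads )) bytes per logical block"   # prints 4096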
00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:38.466 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.466 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:38.467 13:33:37 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:38.467 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.467 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:38.468 13:33:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:38.468 13:33:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:38.468 13:33:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:38.468 13:33:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:38.468 13:33:37 
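
The field-by-field trace above is one mechanism repeated: the nvme_get helper in nvme/functions.sh shells out to nvme-cli, splits every "field : value" line of the id-ctrl/id-ns output on ':', and evals each pair into a global associative array named by the caller (nvme0n1 above, nvme1 next). A minimal sketch reconstructed from the functions.sh@16-@23 lines visible in the trace -- the exact body upstream may differ:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"    # e.g. declare -gA 'nvme1=()'
    while IFS=: read -r reg val; do
        # Skip banner/blank lines that carry no value, then store the pair,
        # trimming the whitespace the ':' split leaves behind.
        [[ -n $val ]] && eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")    # e.g. id-ctrl /dev/nvme1
}

Once "nvme_get nvme1 id-ctrl /dev/nvme1" completes, every identify field is a plain lookup: ${nvme1[sn]} is "12340 ", ${nvme1[oncs]} is 0x15d, and so on.
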
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:38.468 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 
13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:38.469 
13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:38.469 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.470 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.471 13:33:37 nvme_scc -- 
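
Of all the nvme1 fields captured above, the one an nvme_scc run actually hinges on is oncs: 0x15d has ONCS bit 8 (0x100) set, meaning the QEMU controller advertises the Simple Copy command. With the parsed array in hand, the support check reduces to a bit test -- the helper name below is illustrative, not necessarily the one functions.sh uses:

# Illustrative SCC capability check against a parsed id-ctrl array.
supports_scc() {
    local -n _ctrl=$1              # nameref, e.g. to nvme1
    (( ${_ctrl[oncs]} & 0x100 ))   # ONCS bit 8 = Copy command supported
}
# supports_scc nvme1 -> true here, since 0x15d & 0x100 != 0
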
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:38.471 13:33:37 
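
The functions.sh@47-@57 lines bracketing the dumps show the outer discovery loop: iterate /sys/class/nvme/nvme*, filter each controller's PCI address through pci_can_use (the allow/block-list check in scripts/common.sh -- 0000:00:10.0 passes here), parse id-ctrl, then glob both namespace node flavors -- the generic char device ng1n1 and the block device nvme1n1 -- and parse id-ns for each. A condensed sketch of that shape; the bookkeeping arrays mirror the trace, while the function wrapper and globbing details are assumptions:

shopt -s extglob nullglob
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

scan_nvme_ctrls() {
    local ctrl pci ctrl_dev ns ns_dev
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:10.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        local -n _ctrl_ns=${ctrl_dev}_ns
        # Matches both ng1n1 (char node) and nvme1n1 (block node):
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev               # keyed by ns id
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        unset -n _ctrl_ns
    done
}
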
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:38.471 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:13:38.472 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
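The block above is the whole trick behind these traces: nvme_get runs nvme-cli, splits each "reg : val" output line at the first colon, and evals the pair into a global associative array named after the device node (ng1n1, nvme1n1, ...). A minimal sketch of that loop, reconstructed from the xtrace lines at functions.sh@16-23 — the whitespace handling is an assumption, not SPDK's exact code:

    # Sketch of the traced nvme_get helper (trimming details are my guess).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # global assoc array, e.g. nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue             # skip banner/blank lines
            reg=${reg//[[:space:]]/}              # "lbaf  0 " -> "lbaf0"
            val=${val#"${val%%[![:space:]]*}"}    # strip leading padding
            eval "${ref}[${reg}]=\"${val}\""      # e.g. ng1n1[rescap]="0"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # usage, as in the trace: nvme_get nvme1n1 id-ns /dev/nvme1n1
    # afterwards: echo "${nvme1n1[nsze]}"   -> 0x17a17a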
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:13:38.473 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:13:38.474 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
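Each lbafN value above describes one LBA format: ms is metadata bytes per block, lbads is log2 of the data block size, rp is relative performance. flbas=0x7 selects format 7, which is why lbaf7 carries the "(in use)" marker — 4096-byte blocks with 64 bytes of metadata. A tiny decoder for one of these strings (the helper name is mine, purely for illustration):

    # Hypothetical decoder for an "ms:.. lbads:.. rp:.." string (not in functions.sh).
    decode_lbaf() {
        local ms lbads rp
        read -r ms lbads rp <<< "${1//[a-z:]/ }"   # drop the labels, keep the numbers
        echo "data block $((1 << lbads)) B, metadata $ms B, relative perf $rp"
    }
    decode_lbaf 'ms:64 lbads:12 rp:0'   # -> data block 4096 B, metadata 64 B, relative perf 0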
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:13:38.475 13:33:37 nvme_scc -- scripts/common.sh@18 -- # local i
00:13:38.475 13:33:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:13:38.475 13:33:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:38.475 13:33:37 nvme_scc -- scripts/common.sh@27 -- # return 0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
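Between namespaces the script records the per-controller bookkeeping seen at functions.sh@60-63 (ctrls, nvmes, bdfs, ordered_ctrls) and advances to the next /sys/class/nvme entry, gated by pci_can_use from scripts/common.sh. A sketch of that outer loop as it appears in the trace — how $pci is derived from sysfs is my assumption, and the filter is stubbed out:

    # Sketch of the enumeration at functions.sh@47-63; names are from the trace.
    pci_can_use() { return 0; }   # stand-in: the traced run allowed every device
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumption)
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # nvme_get as sketched earlier
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of this ctrl's ns map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done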
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:13:38.475 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:13:38.476 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
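The wctemp=343 and cctemp=373 read out above are the NVMe warning and critical composite temperature thresholds, which the spec reports in Kelvin; a quick integer conversion confirms the usual QEMU defaults of roughly 70 °C and 100 °C:

    # WCTEMP/CCTEMP are Kelvin per the NVMe spec; integer math is close enough
    # (the exact offset is 273.15).
    k2c() { echo $(( $1 - 273 )); }
    k2c 343   # -> 70   (warning threshold, ~70 C)
    k2c 373   # -> 100  (critical threshold, ~100 C)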
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:13:38.477 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
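The for-ns glob at functions.sh@54 is a bash extglob pattern: for nvme2 it expands to @(ng2|nvme2n)*, so a single pass picks up both the character-device nodes (ng2n1) and the block-device nodes (nvme2n1) under the controller's sysfs directory, and ${ns##*n} strips everything through the last 'n' to recover the namespace index. A standalone repro of just the glob mechanics:

    # Repro of the traced namespace glob (requires extglob).
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @(ng2|nvme2n)*
        ns_dev=${ns##*/}                                  # ng2n1 or nvme2n1
        echo "$ns_dev -> namespace index ${ns_dev##*n}"   # strip through the last 'n'
    done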
13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:38.478 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.479 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:38.480 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 
13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:38.480 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.481 13:33:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.481 13:33:37 
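[editor's aside] For every node so far the id-ns pass returns the same geometry: nsze = ncap = nuse = 0x100000 with lbaf4 (lbads:12) marked in use, i.e. 0x100000 blocks of 4 KiB per namespace. A quick check of that arithmetic:

# Sanity arithmetic on the traced id-ns values: 0x100000 blocks at the
# in-use lbads:12 format (2^12 = 4096-byte blocks) is a 4 GiB namespace.
echo $(( 0x100000 * (1 << 12) ))        # 4294967296 bytes
echo $(( (0x100000 << 12) >> 30 ))GiB   # 4GiB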
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.481 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.482 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- 
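[editor's aside] At functions.sh@54 the loop driving all of this switches from the generic char-device nodes (ng2n1..ng2n3) to the block namespace nvme2n1; the extglob pattern in the for statement is what matches both families under one controller. A sketch of that enumeration, using the sysfs controller path visible in the trace (everything else here is illustrative):

#!/usr/bin/env bash
# Sketch of the namespace glob from functions.sh@54. For
# ctrl=/sys/class/nvme/nvme2, "${ctrl##*nvme}" is "2" and
# "${ctrl##*/}" is "nvme2", so the pattern matches both ng2n*
# (generic char devices) and nvme2n* (block namespaces).
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                   # ng2n1, ..., nvme2n1, ...
    [[ -e $ns ]] || continue
    # The traced code indexes per-controller tables by namespace id,
    # extracted with ${ns_dev##*n}: ng2n1 -> 1, nvme2n2 -> 2, ...
    echo "namespace node $ns_dev (nsid ${ns_dev##*n})"
done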
nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.483 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:38.483 13:33:37 nvme_scc -- 
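Each assignment appears twice above (the quoted form at @23, then the expanded form) only because xtrace prints the eval command and then its effect. The eval is needed because the target array's name arrives in a variable; a nameref would be the bash 4.3+ alternative. A small self-contained illustration, assuming only well-formed key/value pairs like the ones in this log:

declare -A nvme2n1=()
ref=nvme2n1 reg=nsze val=0x100000
eval "${ref}[${reg}]=\"${val}\""   # expands to nvme2n1[nsze]="0x100000", as traced
declare -n arr=$ref                # nameref alternative to the eval
arr[flbas]=0x4                     # also lands in nvme2n1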
nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.483 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.484 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:38.485 13:33:37 nvme_scc -- 
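The lbafN strings repeated through these id-ns dumps decode as: ms = metadata bytes per block, lbads = log2 of the LBA data size, rp = relative performance. flbas=0x4 (low four bits) selects lbaf4, the entry tagged "(in use)", so every namespace in this run is formatted with 4096-byte blocks and no metadata; with nsze=0x100000 blocks that works out to 4 GiB per namespace. The arithmetic, using values copied from the log:

flbas=0x4 nsze=0x100000 lbads=12
fmt=$((flbas & 0xf))                    # -> 4, i.e. lbaf4 "(in use)"
bs=$((1 << lbads))                      # -> 4096 bytes per block
echo "$((nsze * bs / 1024**3)) GiB"     # -> 4 GiB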
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.485 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:38.486 
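The @54 loop that keeps restarting above relies on extglob: for ctrl=/sys/class/nvme/nvme2 the pattern expands to /sys/class/nvme/nvme2/@(ng2|nvme2n)*, so one pass picks up both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3), and ${ns##*n} reduces either spelling to the namespace number used as the _ctrl_ns index at @58. The same loop re-run in isolation (only meaningful on a host that actually has these sysfs entries):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2"
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "ns_dev=${ns##*/} index=${ns##*n}"   # index -> 1, 2 or 3
done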
13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:38.486 13:33:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.486 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- 
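nguid and eui64 come back as all zeros for every namespace in this run; per the NVMe spec an all-zero identifier means the namespace simply does not report one, which is normal for QEMU-emulated drives like these. A one-line guard for tooling that wants to detect that case (hypothetical variable, value copied from the log):

nguid=00000000000000000000000000000000
[[ $nguid =~ ^0+$ ]] && echo "namespace reports no NGUID"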
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:38.487 13:33:37 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:38.487 13:33:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:38.487 13:33:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:38.487 13:33:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:38.487 13:33:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:38.487 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 
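With nvme2 fully parsed, @60-63 file it into the bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls) keyed by device name and PCI address 0000:00:12.0, and the outer @47 loop moves on to nvme3 at 0000:00:13.0. The pci_can_use trace showing a bare [[ =~ 0000:00:13.0 ]] is matching against an empty $PCI_ALLOWED; with neither an allow list nor a block list set, every device passes. A simplified sketch of that filter (the real logic lives in scripts/common.sh and uses =~; the glob matching here is illustrative):

pci_can_use_sketch() {
    local pci=$1
    # if an allow list is set, the device must be on it
    if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $pci "* ]]; then
        return 1
    fi
    # and it must not be on the block list
    [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && return 1
    return 0
}
pci_can_use_sketch 0000:00:13.0 && echo "keeping nvme3"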
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:38.488 13:33:37 nvme_scc -- 
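Two of the id-ctrl fields just captured are packed values worth decoding: ver=0x10400 is the NVMe version register with major/minor/tertiary in bits 31:16/15:8/7:0, i.e. NVMe 1.4.0, and mdts=7 caps a single transfer at 2^7 units of the controller's minimum page size. Assuming the usual 4 KiB CAP.MPSMIN (that register is not part of this dump):

ver=0x10400 mdts=7 mpsmin_bytes=4096   # 4 KiB MPSMIN is an assumption
printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # -> 512 KiB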
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:38.488 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:38.488 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 
13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:38.489 13:33:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 
13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:38.489 
13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.489 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:38.490 13:33:37 nvme_scc -- 
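[Annotation] The long dump above is functions.sh's nvme_get loop: it runs `nvme id-ctrl` against the controller, splits each output line on the colon, and evals every non-empty value into a bash associative array (nvme3[rab]=6, nvme3[oncs]=0x15d, and so on). A minimal sketch of the same pattern, with hypothetical names (ctrl_regs, parse_id_ctrl) — the in-tree helper additionally handles field shifting and the per-namespace nested arrays:

#!/usr/bin/env bash
# Sketch of the nvme_get parse loop traced above. "ctrl_regs" and
# "parse_id_ctrl" are hypothetical names; requires root and the kernel
# nvme driver so that /dev/nvmeX exists.
declare -A ctrl_regs

parse_id_ctrl() {
  local dev=$1 reg val
  # nvme id-ctrl prints one "reg : value" pair per line; split on ':'.
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # drop the padding around the key
    val=${val# }                 # drop the single space after ':'
    [[ -n $val ]] && ctrl_regs[$reg]=$val
  done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme3
echo "oncs=${ctrl_regs[oncs]} mdts=${ctrl_regs[mdts]}"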
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:38.490 13:33:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:38.490 13:33:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:38.750 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:38.751 13:33:37 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:38.751 13:33:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:38.751 13:33:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:38.751 13:33:37 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:39.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:39.594 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.594 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.594 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.594 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.594 13:33:38 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:39.594 13:33:38 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:39.594 13:33:38 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.594 13:33:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:39.594 ************************************ 00:13:39.594 START TEST nvme_simple_copy 00:13:39.594 ************************************ 00:13:39.594 13:33:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:39.864 Initializing NVMe Controllers 00:13:39.864 Attaching to 0000:00:10.0 00:13:39.864 Controller supports SCC. Attached to 0000:00:10.0 00:13:39.864 Namespace ID: 1 size: 6GB 00:13:39.864 Initialization complete. 
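[Annotation] The controller selection that just ran boils down to one arithmetic test per controller: ONCS bit 8 (0x100) advertises the NVMe Copy (simple copy) command, which is what `(( oncs & 1 << 8 ))` checks against the 0x15d read back from each device. The same test in isolation, wrapped in a hypothetical helper name (has_simple_copy):

# ONCS bit 8 (0x100) set => controller supports the Copy command.
# Bash arithmetic follows C precedence, so `oncs & 1 << 8` parses as
# `oncs & (1 << 8)` — same expression as ctrl_has_scc above.
has_simple_copy() {
  local oncs=$1
  (( oncs & 1 << 8 ))
}

has_simple_copy 0x15d && echo "SCC supported"      # bit 8 set in 0x15d
has_simple_copy 0x05d || echo "SCC not supported"  # bit 8 clear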
00:13:39.864 00:13:39.864 Controller QEMU NVMe Ctrl (12340 ) 00:13:39.864 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:39.864 Namespace Block Size:4096 00:13:39.864 Writing LBAs 0 to 63 with Random Data 00:13:39.864 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:39.864 LBAs matching Written Data: 64 00:13:39.864 00:13:39.864 real 0m0.289s 00:13:39.864 user 0m0.111s 00:13:39.864 sys 0m0.073s 00:13:39.864 13:33:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.864 ************************************ 00:13:39.864 END TEST nvme_simple_copy 00:13:39.864 ************************************ 00:13:39.864 13:33:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:40.125 00:13:40.125 real 0m7.804s 00:13:40.125 user 0m1.168s 00:13:40.125 sys 0m1.389s 00:13:40.125 13:33:39 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.125 ************************************ 00:13:40.125 END TEST nvme_scc 00:13:40.125 ************************************ 00:13:40.125 13:33:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:40.125 13:33:39 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:40.125 13:33:39 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:40.125 13:33:39 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:40.125 13:33:39 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:40.125 13:33:39 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:40.125 13:33:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.125 13:33:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.125 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:13:40.125 ************************************ 00:13:40.125 START TEST nvme_fdp 00:13:40.125 ************************************ 00:13:40.125 13:33:39 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:40.125 * Looking for test storage... 00:13:40.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:40.125 13:33:39 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:40.125 13:33:39 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:13:40.125 13:33:39 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:40.125 13:33:39 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:40.125 13:33:39 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
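[Annotation] The nvme_simple_copy pass above writes random data to LBAs 0-63, issues a Copy command targeting destination LBA 256, and reads both ranges back; "LBAs matching Written Data: 64" is the pass criterion. A rough out-of-band spot check of the same property with coreutils — purely a sketch, assuming the namespace were visible as /dev/nvme1n1 under the kernel driver (during the run the device is bound to uio_pci_generic, so this node does not actually exist) and using the 4096-byte block size reported above:

# Hypothetical re-check: read back source (LBA 0-63) and destination
# (LBA 256-319) ranges and compare them byte for byte.
bs=4096
dd if=/dev/nvme1n1 bs=$bs skip=0   count=64 status=none > /tmp/src.bin
dd if=/dev/nvme1n1 bs=$bs skip=256 count=64 status=none > /tmp/dst.bin
cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"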
ver1_l : ver2_l) )) 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:40.126 13:33:39 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.126 13:33:39 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:40.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.126 --rc genhtml_branch_coverage=1 00:13:40.126 --rc genhtml_function_coverage=1 00:13:40.126 --rc genhtml_legend=1 00:13:40.126 --rc geninfo_all_blocks=1 00:13:40.126 --rc geninfo_unexecuted_blocks=1 00:13:40.126 00:13:40.126 ' 00:13:40.126 13:33:39 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:40.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.126 --rc genhtml_branch_coverage=1 00:13:40.126 --rc genhtml_function_coverage=1 00:13:40.126 --rc genhtml_legend=1 00:13:40.126 --rc geninfo_all_blocks=1 00:13:40.126 --rc geninfo_unexecuted_blocks=1 00:13:40.126 00:13:40.126 ' 00:13:40.126 13:33:39 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:40.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.126 --rc genhtml_branch_coverage=1 00:13:40.126 --rc genhtml_function_coverage=1 00:13:40.126 --rc genhtml_legend=1 00:13:40.126 --rc geninfo_all_blocks=1 00:13:40.126 --rc geninfo_unexecuted_blocks=1 00:13:40.126 00:13:40.126 ' 00:13:40.126 13:33:39 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:40.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.126 --rc genhtml_branch_coverage=1 00:13:40.126 --rc genhtml_function_coverage=1 00:13:40.126 --rc genhtml_legend=1 00:13:40.126 --rc geninfo_all_blocks=1 00:13:40.126 --rc geninfo_unexecuted_blocks=1 00:13:40.126 00:13:40.126 ' 00:13:40.126 13:33:39 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
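[Annotation] The cmp_versions dance above decides whether the installed lcov is older than 2 by splitting both version strings on `.`, `-`, and `:` (the `IFS=.-:` reads) and comparing components numerically, left to right. A condensed sketch of that comparison under a hypothetical name (version_lt) — the in-tree cmp_versions also implements the >, =, and other operators, and like this sketch it assumes numeric components:

# Return 0 (true) when dotted version $1 is strictly older than $2.
version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # missing parts count as 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal -> not less-than
}

version_lt 1.15 2 && echo "old lcov: enable branch/function coverage opts"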
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.126 13:33:39 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.126 13:33:39 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.126 13:33:39 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.126 13:33:39 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.126 13:33:39 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:40.126 13:33:39 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:40.126 13:33:39 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:40.126 13:33:39 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.126 13:33:39 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:40.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:40.647 Waiting for block devices as requested 00:13:40.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.907 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.907 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.907 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:46.212 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:46.212 13:33:45 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:46.212 13:33:45 nvme_fdp 
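[Annotation] Each time paths/export.sh is sourced it prepends its toolchain directories unconditionally, which is why the PATH exported above carries the same /opt/go, /opt/protoc, and /opt/golangci entries several times over. A dedupe-on-prepend variant would keep PATH flat — a sketch, not what the in-tree script does (it intentionally stays simple):

# Prepend a directory to PATH only if it is not already present.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;                 # already there, leave PATH untouched
    *) PATH="$1:$PATH" ;;
  esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
export PATH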
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:46.212 13:33:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:46.212 13:33:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:46.212 13:33:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:46.212 13:33:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- 
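[Annotation] scan_nvme_ctrls, entered just above, iterates /sys/class/nvme/nvme*, resolves each controller's PCI address, and asks pci_can_use whether the device passes the allow/block filters (the bare `[[ =~ 0000:00:11.0 ]]` in the trace is evidently a match against an empty allow-list, which permits everything). A stripped-down version of that walk, under the assumption that PCI_ALLOWED is a space-separated allow-list where empty means "allow all" — the real pci_can_use in scripts/common.sh also honors a block list:

# Enumerate NVMe controllers via sysfs, the way the scan above does.
PCI_ALLOWED=${PCI_ALLOWED:-}

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  # The "device" link points at the PCI function, e.g. 0000:00:11.0.
  pci=$(basename "$(readlink -f "$ctrl/device")")
  if [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $pci "* ]]; then
    continue   # not on the allow-list, skip like pci_can_use would
  fi
  echo "scanning ${ctrl##*/} at $pci"
done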
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:46.212 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:46.213 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:46.213 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.213 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:46.214 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.214 
13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:46.214 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:46.215 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:46.215 13:33:45 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:46.215 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.215 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:46.216 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:46.216 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
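
The xtrace run above is nvme/functions.sh's nvme_get helper turning nvme-cli's "name : value" text output into global bash associative arrays (nvme0 from id-ctrl, ng0n1 from id-ns). Below is a minimal sketch of that idiom as it appears in the trace; the helper name, the IFS=:/read/eval pattern, and the array names come straight from the trace, while the whitespace trimming and the plain `nvme` invocation are assumptions (the trace calls a local build at /usr/local/src/nvme-cli/nvme and runs as root against a real device).

#!/usr/bin/env bash
# Sketch of the nvme_get idiom seen in the trace, under the assumptions above.
nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                     # global assoc array, e.g. nvme0=()

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue           # skip header lines with no "name : value"
        reg=${reg//[[:space:]]/}            # "ps 0 " -> "ps0", "oacs " -> "oacs"
        val=${val# }                        # drop the single space after ':'
        eval "${ref}[${reg}]=\"${val}\""    # -> nvme0[oacs]="0x12a"
    done < <(nvme "$cmd" "$dev")
}

nvme_get nvme0 id-ctrl /dev/nvme0           # hypothetical invocation
echo "OACS: ${nvme0[oacs]}, ONCS: ${nvme0[oncs]}"

Note that splitting only on the first ':' is what lets composite values through intact: "ps 0 : mp:25.00W operational ..." parses to nvme0[ps0]='mp:25.00W operational ...' exactly as in the trace. The dump also pins down the namespace geometry: flbas=0x4 selects lbaf4 ("ms:0 lbads:12"), i.e. 2^12 = 4096-byte blocks with no metadata, and nsze=0x140000 = 1,310,720 such blocks = 5 GiB.
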
00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:46.217 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:46.217 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:46.218 13:33:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:46.218 13:33:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:46.218 13:33:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:46.218 13:33:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:46.218 13:33:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:46.218 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:46.219 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
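The xtrace above shows the pattern nvme/functions.sh uses to build its controller table: the output of nvme id-ctrl /dev/nvme1 is read line by line, IFS=: splits each "register: value" pair into reg and val, and eval assigns the pair into the nvme1 associative array declared with local -gA. A minimal standalone sketch of that parse loop, using a hypothetical array name "ctrl" and a plain assignment in place of the script's eval/nameref plumbing:

    #!/usr/bin/env bash
    # Sketch of the id-ctrl parse loop visible in the trace (assumes nvme-cli is installed).
    declare -A ctrl

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                  # keep only "register : value" lines
        reg=${reg//[[:space:]]/}                   # register names carry no internal spaces
        val=${val#"${val%%[![:space:]]*}"}         # strip leading whitespace only; trailing
                                                   # spaces survive, matching the traced
                                                   # values (e.g. sn='12340 ')
        ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme1)

    echo "sn=${ctrl[sn]} mn=${ctrl[mn]} fr=${ctrl[fr]}"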
00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.219 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
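Once the array is filled, every Identify Controller field is addressable by name, and the hex-encoded capability fields can be decoded with shell arithmetic. A usage sketch against the hypothetical "ctrl" array from the loop above (bit positions per the NVMe base spec; the 4 KiB page size in the mdts math is an assumption, the real unit is the controller's CAP.MPSMIN):

    # OACS bit 3 (0x8) advertises namespace management; the traced value 0x12a has it set.
    if (( ${ctrl[oacs]:-0} & 0x8 )); then
        echo "namespace management supported"
    fi

    # mdts is log2 of the max transfer size in minimum-page-size units; assuming a
    # 4 KiB minimum page, the traced mdts=7 gives 4096 << 7 = 512 KiB per transfer.
    echo "max transfer: $(( 4096 << ${ctrl[mdts]:-0} )) bytes"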
00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:46.220 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:46.221 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:46.222 13:33:45 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
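Just before this point the trace switched from the controller table to namespace enumeration: the loop header for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* is an extglob pattern that matches both the generic character node (ng1n1) and the block node (nvme1n1) under /sys/class/nvme/nvme1, and each match is fed through the same nvme_get parser, now with nvme id-ns. A self-contained sketch of that enumeration (hypothetical loop body; requires bash extglob):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        inst=${ctrl##*nvme}                        # controller instance, e.g. "1"
        # @(ng1|nvme1n)* matches ng1n1 as well as nvme1n1
        for ns in "$ctrl/"@("ng${inst}"|"nvme${inst}n")*; do
            echo "would run: nvme id-ns /dev/${ns##*/}"
        done
    done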
00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:46.222 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:46.222 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
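The id-ns fields landing in ng1n1 here are enough to recover the active LBA format: the low nibble of flbas (0x7 in this trace) indexes the lbafN descriptors, where lbads is log2 of the logical block size and ms is the per-block metadata byte count; the selected descriptor is also the one tagged "(in use)" a few entries below. A hedged decode sketch seeded with the traced values (hypothetical "ns" array, filled the same way as the controller sketch earlier):

    declare -A ns=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )

    fmt=$(( ${ns[flbas]} & 0xf ))                  # low nibble selects lbafN
    entry=${ns[lbaf$fmt]}
    [[ $entry =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    [[ $entry =~ ms:([0-9]+) ]] && ms=${BASH_REMATCH[1]}
    echo "format $fmt: $(( 1 << lbads ))-byte blocks, ${ms}B metadata"
    # -> format 7: 4096-byte blocks, 64B metadata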
00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:46.223 13:33:45 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:46.223 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:46.224 13:33:45 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.224 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.224 13:33:45 nvme_fdp -- 
00:13:46.224 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@21-23, nvme_get nvme1n1 id-ns, remaining fields]
    npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:46.224 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@21-23, nvme_get nvme1n1 id-ns, LBA formats 4-7]
    lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:13:46.225 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@58-63, register controller nvme1]
    _ctrl_ns[1]=nvme1n1 ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:13:46.225 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@47-52, next controller]
    /sys/class/nvme/nvme2 exists; pci=0000:00:12.0; pci_can_use 0000:00:12.0 passes
    (scripts/common.sh@18-27: no allow/block filter set, returns 0); ctrl_dev=nvme2;
    nvme_get nvme2 id-ctrl /dev/nvme2 via /usr/local/src/nvme-cli/nvme
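[editor's note] The controller gate the trace just walked (functions.sh@47-52 plus the pci_can_use check in scripts/common.sh) reduces to a small pattern. Below is a minimal sketch of that gate; the PCI_ALLOWED/PCI_BLOCKED semantics are an assumption modeled on SPDK's allow/block env filters, and the helper is illustrative rather than the exact SPDK source.

#!/usr/bin/env bash
# Sketch of the controller-discovery gate seen in the trace. The
# allow/block list handling below is an assumption, not SPDK's code.
pci_can_use() {
    local bdf=$1
    # If an allowlist is set, the BDF must be on it.
    if [[ -n ${PCI_ALLOWED:-} ]] && [[ " $PCI_ALLOWED " != *" $bdf "* ]]; then
        return 1
    fi
    # If a blocklist is set, the BDF must not be on it.
    if [[ -n ${PCI_BLOCKED:-} ]] && [[ " $PCI_BLOCKED " == *" $bdf "* ]]; then
        return 1
    fi
    return 0
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                         # glob may match nothing
    bdf=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0
    pci_can_use "$bdf" || continue
    echo "probing ${ctrl##*/} at $bdf"                 # trace then runs nvme_get
done

In this run both filters are empty, which is why the trace shows the `[[ -z '' ]]` branch returning 0 for every controller.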
00:13:46.225 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@21-23, nvme_get nvme2 id-ctrl]
    vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7
    cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
    mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
    anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
    oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
    mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0
    fcatt=0 msdbd=0 (power-state entries follow below)
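[editor's note] Every field above comes from the same tiny loop: nvme_get splits each `field : value` row of the nvme-cli output on `:` and stores it in an associative array named after the device. A condensed sketch of that pattern, assuming nvme-cli's human-readable id-ctrl/id-ns layout and using a bash nameref where the original trace uses eval; the trimming rules are an approximation:

#!/usr/bin/env bash
# Sketch of the nvme_get pattern driving the trace: one global assoc
# array per device, keyed by the id-ctrl/id-ns field names.
nvme_get() {
    local ref=$1; shift
    declare -gA "$ref=()"        # global array named after the device
    local -n _arr=$ref           # nameref instead of the log's eval
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # 'sn      ' -> 'sn'
        val=${val#"${val%%[![:space:]]*}"}        # ltrim the value
        [[ -n $reg && -n $val ]] && _arr[$reg]=$val
    done < <("$@")
}

# Usage mirroring the trace (needs the device to exist and nvme-cli installed):
nvme_get nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
echo "sn=${nvme2[sn]} subnqn=${nvme2[subnqn]}"

The empty first read the trace shows as `[[ -n '' ]]` is simply a value-less header line being skipped by the same guard.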
00:13:46.228 13:33:45 nvme_fdp -- [xtrace condensed: nvme_get nvme2 id-ctrl, tail and power state]
    ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:13:46.228 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@53-57, enumerate nvme2 namespaces]
    _ctrl_ns=nvme2_ns; /sys/class/nvme/nvme2/ng2n1 exists; ns_dev=ng2n1;
    nvme_get ng2n1 id-ns /dev/ng2n1: nsze=0x100000 ncap=0x100000
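[editor's note] The namespace loop at functions.sh@54 uses an extglob alternation so it matches both the generic char-dev nodes (ng2n1) and the block nodes (nvme2n1) under a controller's sysfs directory. A standalone sketch of that glob, using the nvme2 controller from the trace:

#!/usr/bin/env bash
# Sketch of the namespace glob at functions.sh@54. The @(...) alternation
# requires extglob, enabled at top level before the pattern is parsed.
shopt -s extglob
ctrl=/sys/class/nvme/nvme2
# "${ctrl##*nvme}" -> "2" and "${ctrl##*/}" -> "nvme2", so the pattern
# expands to /sys/class/nvme/nvme2/@(ng2|nvme2n)* and hits ng2n1, ng2n2, ...
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue
    echo "namespace node: ${ns##*/}"
done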
00:13:46.228 13:33:45 nvme_fdp -- [xtrace condensed: nvme_get ng2n1 id-ns, cont.]
    nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
    mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:46.229 13:33:45 nvme_fdp -- [xtrace condensed: nvme/functions.sh@58 and @54-57]
    _ctrl_ns[1]=ng2n1; next namespace node /sys/class/nvme/nvme2/ng2n2 exists; ns_dev=ng2n2;
    nvme_get ng2n2 id-ns /dev/ng2n2 begins
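[editor's note] The ng2n1 values just captured are enough to recover the active block size by hand: flbas=0x4 selects lbaf4, whose lbads:12 means 2^12-byte LBAs. A worked check against those numbers (the string parsing below is ad hoc, not functions.sh code):

#!/usr/bin/env bash
# Worked example using the ng2n1 values from the trace: the low nibble of
# flbas indexes the LBA-format list, and lbads is log2(block size).
declare -A ng2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
fmt=$(( ${ng2n1[flbas]} & 0xf ))                       # -> 4
lbads=${ng2n1[lbaf$fmt]#*lbads:}; lbads=${lbads%% *}   # -> 12
echo "in-use format lbaf$fmt, block size $((1 << lbads)) bytes"   # -> 4096

With nsze=0x100000 blocks at 4096 bytes each, that works out to a 4 GiB namespace, consistent with a QEMU-emulated test device.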
00:13:46.229 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:46.229 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:46.229 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:46.229 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:46.229 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:13:46.230 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:46.230 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:46.230 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
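Annotation: in every one of these namespaces flbas=0x4 points at lbaf4, the only format flagged "(in use)": ms:0 lbads:12 rp:0. LBADS is a power-of-two exponent, so the active format is 4096-byte blocks with no per-LBA metadata. A quick worked decode with the captured values (variable names are illustrative):

  flbas=0x4
  idx=$(( flbas & 0xf ))            # FLBAS bits 3:0 select the LBA format index
  lbads=12                          # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
  echo "lbaf$idx in use, $(( 1 << lbads ))-byte blocks"   # -> lbaf4 in use, 4096-byte blocks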
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:46.231 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:46.232 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:46.232 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:46.232 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:46.232 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:46.232 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
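Annotation: the loop header logged at functions.sh@54 is an extglob alternation, which is why the same pass visits both the generic character nodes (ng2n1..ng2n3, above) and the block nodes (nvme2n1, nvme2n2, next) under the controller's sysfs directory. The glob in isolation, with the controller path taken from the log:

  shopt -s extglob                  # the @(...|...) alternation needs extglob
  ctrl=/sys/class/nvme/nvme2
  # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so this matches ng2n* and nvme2n*
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "node ${ns##*/} -> namespace index ${ns##*n}"
  done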
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:46.233 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
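Annotation: once an array like nvme2n1 is populated, the namespace geometry falls out of nsze and the in-use block size. A quick check of the numbers recorded above (0x100000 LBAs at 4096 bytes each is 4 GiB); the standalone array here is an assumed stand-in for the one the script builds:

  declare -A nvme2n1=( [nsze]=0x100000 )   # value copied from the trace
  blocks=$(( nvme2n1[nsze] ))              # bash arithmetic accepts the 0x prefix
  echo "$(( blocks * 4096 / 1024 / 1024 )) MiB"   # -> 4096 MiB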
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:46.234 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:46.235 13:33:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.235 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.236 13:33:45 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:46.499 13:33:45 
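[note] The records above are bash xtrace output from the nvme_get helper in nvme/functions.sh: it pipes nvme-cli's id-ns/id-ctrl output through a "while IFS=: read -r reg val" loop and eval's each register/value pair into a global associative array (nvme2n2, nvme2n3, ...). A minimal standalone sketch of that pattern, under the assumption that a plain assignment stands in for the script's eval; parse_id_output and the id array are illustrative names, not the script's own:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing pattern visible in the trace above:
# split each "reg : val" line from nvme-cli at the first colon and
# store it in an associative array.
declare -A id=()

parse_id_output() {
  local reg val
  while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue         # skip blank or partial lines
    reg=${reg//[[:space:]]/}                     # keys arrive padded with spaces
    val=${val#"${val%%[![:space:]]*}"}           # trim leading spaces, keep trailing
    id[$reg]=$val                                # functions.sh uses eval here instead
  done
}

# Usage (assumes nvme-cli is installed and the namespace exists):
#   sudo nvme id-ns /dev/nvme2n3 | parse_id_output
#   echo "${id[nsze]}"    # -> 0x100000 for the QEMU namespace in this trace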
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:13:46.236 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:13:46.499 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:46.500 13:33:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
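[note] At functions.sh@58-63 each parsed namespace is linked back to its controller: _ctrl_ns (a nameref onto nvme2_ns) maps namespace index to array name, while ctrls, nvmes, bdfs and ordered_ctrls record the controller, its namespace map, and its PCI address. A hedged sketch of consuming a registry with that shape; the array names match the trace, but the values shown and the report loop are illustrative:

#!/usr/bin/env bash
# Sketch: walk a controller registry shaped like the one this trace builds.
declare -A ctrls=([nvme2]=nvme2 [nvme3]=nvme3)
declare -A nvmes=([nvme2]=nvme2_ns [nvme3]=nvme3_ns)
declare -A bdfs=([nvme2]="0000:00:12.0" [nvme3]="0000:00:13.0")
declare -A nvme2_ns=([2]=nvme2n2 [3]=nvme2n3)   # entries captured in this section
declare -A nvme3_ns=()

for ctrl in "${!ctrls[@]}"; do
  declare -n ns_map=${nvmes[$ctrl]}    # nameref, as in "local -n _ctrl_ns=..." above
  printf '%s @ %s: %d namespace(s)\n' "$ctrl" "${bdfs[$ctrl]}" "${#ns_map[@]}"
  unset -n ns_map                      # drop the nameref before re-pointing it
done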
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:13:46.501 13:33:45 nvme_fdp -- scripts/common.sh@18 -- # local i
00:13:46.501 13:33:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:13:46.501 13:33:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:46.501 13:33:45 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:13:46.501 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:13:46.502 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:13:46.503 13:33:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:13:46.504 13:33:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]]
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@75
-- # [[ -n 0x8000 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:46.504 13:33:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:46.504 13:33:45 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:46.504 13:33:45 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:46.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.333 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.333 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.333 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.333 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.333 13:33:46 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:47.334 13:33:46 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:47.334 13:33:46 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.334 13:33:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:47.594 ************************************ 00:13:47.594 START TEST nvme_flexible_data_placement 00:13:47.594 ************************************ 00:13:47.594 13:33:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:47.856 Initializing NVMe Controllers 00:13:47.856 Attaching to 0000:00:13.0 00:13:47.856 Controller supports FDP Attached to 0000:00:13.0 00:13:47.856 Namespace ID: 1 Endurance Group ID: 1 00:13:47.856 Initialization complete. 
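For reference, the controller selection traced above comes down to a single test: bit 19 of the Identify Controller CTRATT field advertises Flexible Data Placement, which is why nvme3 (ctratt=0x88010) is picked while the controllers reporting 0x8000 are skipped, and why the fdp example binary then attaches to 0000:00:13.0. A minimal standalone sketch of the same check, assuming nvme-cli is available (the test itself reads ctratt out of its own parsed register arrays instead):

    #!/usr/bin/env bash
    # List controllers that advertise FDP support (CTRATT bit 19, NVMe 2.0).
    ctrl_supports_fdp() {
        local dev=$1 ctratt
        # nvme-cli prints a line of the form "ctratt    : 0x88010"
        ctratt=$(nvme id-ctrl "/dev/$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
        (( ctratt & 1 << 19 ))
    }
    for dev in nvme0 nvme1 nvme2 nvme3; do
        ctrl_supports_fdp "$dev" && echo "$dev supports FDP"
    done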
00:13:47.856 00:13:47.856 ================================== 00:13:47.856 == FDP tests for Namespace: #01 == 00:13:47.856 ================================== 00:13:47.856 00:13:47.856 Get Feature: FDP: 00:13:47.856 ================= 00:13:47.856 Enabled: Yes 00:13:47.856 FDP configuration Index: 0 00:13:47.856 00:13:47.856 FDP configurations log page 00:13:47.856 =========================== 00:13:47.856 Number of FDP configurations: 1 00:13:47.856 Version: 0 00:13:47.856 Size: 112 00:13:47.856 FDP Configuration Descriptor: 0 00:13:47.856 Descriptor Size: 96 00:13:47.856 Reclaim Group Identifier format: 2 00:13:47.856 FDP Volatile Write Cache: Not Present 00:13:47.856 FDP Configuration: Valid 00:13:47.856 Vendor Specific Size: 0 00:13:47.856 Number of Reclaim Groups: 2 00:13:47.856 Number of Reclaim Unit Handles: 8 00:13:47.856 Max Placement Identifiers: 128 00:13:47.856 Number of Namespaces Supported: 256 00:13:47.856 Reclaim Unit Nominal Size: 6000000 bytes 00:13:47.857 Estimated Reclaim Unit Time Limit: Not Reported 00:13:47.857 RUH Desc #000: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #001: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #002: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #003: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #004: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #005: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #006: RUH Type: Initially Isolated 00:13:47.857 RUH Desc #007: RUH Type: Initially Isolated 00:13:47.857 00:13:47.857 FDP reclaim unit handle usage log page 00:13:47.857 ====================================== 00:13:47.857 Number of Reclaim Unit Handles: 8 00:13:47.857 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:47.857 RUH Usage Desc #001: RUH Attributes: Unused 00:13:47.857 RUH Usage Desc #002: RUH Attributes: Unused 00:13:47.857 RUH Usage Desc #003: RUH Attributes: Unused 00:13:47.857 RUH Usage Desc #004: RUH Attributes: Unused 00:13:47.857 RUH Usage Desc #005: RUH Attributes: Unused 00:13:47.857 RUH Usage Desc #006: RUH Attributes: Unused 00:13:47.857 RUH Usage Desc #007: RUH Attributes: Unused 00:13:47.857 00:13:47.857 FDP statistics log page 00:13:47.857 ======================= 00:13:47.857 Host bytes with metadata written: 879128576 00:13:47.857 Media bytes with metadata written: 879362048 00:13:47.857 Media bytes erased: 0 00:13:47.857 00:13:47.857 FDP Reclaim unit handle status 00:13:47.857 ============================== 00:13:47.857 Number of RUHS descriptors: 2 00:13:47.857 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001999 00:13:47.857 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:47.857 00:13:47.857 FDP write on placement id: 0 success 00:13:47.857 00:13:47.857 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:47.857 00:13:47.857 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:47.857 00:13:47.857 Get Feature: FDP Events for Placement handle: #0 00:13:47.857 ======================== 00:13:47.857 Number of FDP Events: 6 00:13:47.857 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:47.857 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:47.857 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:47.857 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:47.857 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:47.857 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:47.857 00:13:47.857 FDP events log page
00:13:47.857 =================== 00:13:47.857 Number of FDP events: 1 00:13:47.857 FDP Event #0: 00:13:47.857 Event Type: RU Not Written to Capacity 00:13:47.857 Placement Identifier: Valid 00:13:47.857 NSID: Valid 00:13:47.857 Location: Valid 00:13:47.857 Placement Identifier: 0 00:13:47.857 Event Timestamp: 7 00:13:47.857 Namespace Identifier: 1 00:13:47.857 Reclaim Group Identifier: 0 00:13:47.857 Reclaim Unit Handle Identifier: 0 00:13:47.857 00:13:47.857 FDP test passed 00:13:47.857 00:13:47.857 real 0m0.305s 00:13:47.857 user 0m0.105s 00:13:47.857 sys 0m0.098s 00:13:47.857 ************************************ 00:13:47.857 END TEST nvme_flexible_data_placement 00:13:47.857 ************************************ 00:13:47.857 13:33:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.857 13:33:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:47.857 ************************************ 00:13:47.857 END TEST nvme_fdp 00:13:47.857 ************************************ 00:13:47.857 00:13:47.857 real 0m7.776s 00:13:47.857 user 0m1.089s 00:13:47.857 sys 0m1.540s 00:13:47.857 13:33:47 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.857 13:33:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:47.857 13:33:47 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:47.857 13:33:47 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:47.857 13:33:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.857 13:33:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.857 13:33:47 -- common/autotest_common.sh@10 -- # set +x 00:13:47.857 ************************************ 00:13:47.857 START TEST nvme_rpc 00:13:47.857 ************************************ 00:13:47.857 13:33:47 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:47.857 * Looking for test storage... 
00:13:47.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:47.857 13:33:47 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:47.857 13:33:47 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:47.857 13:33:47 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.119 13:33:47 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.119 --rc genhtml_branch_coverage=1 00:13:48.119 --rc genhtml_function_coverage=1 00:13:48.119 --rc genhtml_legend=1 00:13:48.119 --rc geninfo_all_blocks=1 00:13:48.119 --rc geninfo_unexecuted_blocks=1 00:13:48.119 00:13:48.119 ' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.119 --rc genhtml_branch_coverage=1 00:13:48.119 --rc genhtml_function_coverage=1 00:13:48.119 --rc genhtml_legend=1 00:13:48.119 --rc geninfo_all_blocks=1 00:13:48.119 --rc geninfo_unexecuted_blocks=1 00:13:48.119 00:13:48.119 ' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.119 --rc genhtml_branch_coverage=1 00:13:48.119 --rc genhtml_function_coverage=1 00:13:48.119 --rc genhtml_legend=1 00:13:48.119 --rc geninfo_all_blocks=1 00:13:48.119 --rc geninfo_unexecuted_blocks=1 00:13:48.119 00:13:48.119 ' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.119 --rc genhtml_branch_coverage=1 00:13:48.119 --rc genhtml_function_coverage=1 00:13:48.119 --rc genhtml_legend=1 00:13:48.119 --rc geninfo_all_blocks=1 00:13:48.119 --rc geninfo_unexecuted_blocks=1 00:13:48.119 00:13:48.119 ' 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66002 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66002 00:13:48.119 13:33:47 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66002 ']' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.119 13:33:47 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.120 13:33:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.120 [2024-11-20 13:33:47.483258] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:13:48.120 [2024-11-20 13:33:47.483393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66002 ] 00:13:48.381 [2024-11-20 13:33:47.647603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.381 [2024-11-20 13:33:47.754217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.381 [2024-11-20 13:33:47.754400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.953 13:33:48 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.953 13:33:48 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:48.953 13:33:48 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:49.236 Nvme0n1 00:13:49.236 13:33:48 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:49.236 13:33:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:49.496 request: 00:13:49.496 { 00:13:49.496 "bdev_name": "Nvme0n1", 00:13:49.496 "filename": "non_existing_file", 00:13:49.496 "method": "bdev_nvme_apply_firmware", 00:13:49.496 "req_id": 1 00:13:49.496 } 00:13:49.496 Got JSON-RPC error response 00:13:49.496 response: 00:13:49.496 { 00:13:49.496 "code": -32603, 00:13:49.496 "message": "open file failed." 00:13:49.496 } 00:13:49.496 13:33:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:49.496 13:33:48 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:49.496 13:33:48 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:49.756 13:33:49 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:49.756 13:33:49 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66002 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66002 ']' 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66002 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66002 00:13:49.756 killing process with pid 66002 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.756 13:33:49 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.757 13:33:49 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66002' 00:13:49.757 13:33:49 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66002 00:13:49.757 13:33:49 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66002 00:13:51.664 ************************************ 00:13:51.664 END TEST nvme_rpc 00:13:51.664 ************************************ 00:13:51.664 00:13:51.664 real 0m3.402s 00:13:51.664 user 0m6.440s 00:13:51.664 sys 0m0.557s 00:13:51.664 13:33:50 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.664 13:33:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 13:33:50 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:51.664 13:33:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:51.664 13:33:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.664 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 ************************************ 00:13:51.664 START TEST nvme_rpc_timeouts 00:13:51.664 ************************************ 00:13:51.664 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:51.664 * Looking for test storage... 00:13:51.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:51.664 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:51.664 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:13:51.664 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:51.664 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.664 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:51.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:51.665 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.665 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.665 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.665 13:33:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.665 --rc genhtml_branch_coverage=1 00:13:51.665 --rc genhtml_function_coverage=1 00:13:51.665 --rc genhtml_legend=1 00:13:51.665 --rc geninfo_all_blocks=1 00:13:51.665 --rc geninfo_unexecuted_blocks=1 00:13:51.665 00:13:51.665 ' 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.665 --rc genhtml_branch_coverage=1 00:13:51.665 --rc genhtml_function_coverage=1 00:13:51.665 --rc genhtml_legend=1 00:13:51.665 --rc geninfo_all_blocks=1 00:13:51.665 --rc geninfo_unexecuted_blocks=1 00:13:51.665 00:13:51.665 ' 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.665 --rc genhtml_branch_coverage=1 00:13:51.665 --rc genhtml_function_coverage=1 00:13:51.665 --rc genhtml_legend=1 00:13:51.665 --rc geninfo_all_blocks=1 00:13:51.665 --rc geninfo_unexecuted_blocks=1 00:13:51.665 00:13:51.665 ' 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:51.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.665 --rc genhtml_branch_coverage=1 00:13:51.665 --rc genhtml_function_coverage=1 00:13:51.665 --rc genhtml_legend=1 00:13:51.665 --rc geninfo_all_blocks=1 00:13:51.665 --rc geninfo_unexecuted_blocks=1 00:13:51.665 00:13:51.665 ' 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66074 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66074 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66106 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66106 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66106 ']' 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.665 13:33:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:51.665 13:33:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:51.665 [2024-11-20 13:33:50.853620] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:51.665 [2024-11-20 13:33:50.853749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66106 ] 00:13:51.665 [2024-11-20 13:33:51.015572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.926 [2024-11-20 13:33:51.120957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.926 [2024-11-20 13:33:51.121009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.493 Checking default timeout settings: 00:13:52.493 13:33:51 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.493 13:33:51 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:52.493 13:33:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:52.493 13:33:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:52.752 Making settings changes with rpc: 00:13:52.752 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:52.752 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:53.023 Check default vs. modified settings: 00:13:53.023 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:53.023 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.283 Setting action_on_timeout is changed as expected. 
00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.283 Setting timeout_us is changed as expected. 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.283 Setting timeout_admin_us is changed as expected. 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
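The three "changed as expected" checks above all follow the same pattern: snapshot the bdev_nvme options with save_config before and after applying bdev_nvme_set_options, then compare each field between the two snapshots. Condensed into a standalone sketch (the rpc.py path, the option flags, and the awk/sed value extraction are taken verbatim from the trace; the tmpfile names are placeholders, and the grep is tightened to an exact-key match so that timeout_us does not also match timeout_admin_us):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for key in action_on_timeout timeout_us timeout_admin_us; do
        # strip quotes/commas from the JSON value, as the test does
        before=$(grep "\"$key\"" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "\"$key\"" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $key is changed as expected."
    done

Note that the test runs this against a freshly started spdk_tgt with no controllers attached yet; several bdev_nvme options can only be set in that state.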
00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66074 /tmp/settings_modified_66074 00:13:53.283 13:33:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66106 00:13:53.283 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66106 ']' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66106 00:13:53.283 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:53.283 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.283 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66106 00:13:53.543 killing process with pid 66106 00:13:53.543 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.543 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.543 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66106' 00:13:53.543 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66106 00:13:53.543 13:33:52 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66106 00:13:54.983 RPC TIMEOUT SETTING TEST PASSED. 00:13:54.984 13:33:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:54.984 ************************************ 00:13:54.984 END TEST nvme_rpc_timeouts 00:13:54.984 ************************************ 00:13:54.984 00:13:54.984 real 0m3.628s 00:13:54.984 user 0m7.047s 00:13:54.984 sys 0m0.546s 00:13:54.984 13:33:54 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.984 13:33:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 13:33:54 -- spdk/autotest.sh@239 -- # uname -s 00:13:54.984 13:33:54 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:54.984 13:33:54 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:54.984 13:33:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.984 13:33:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.984 13:33:54 -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 ************************************ 00:13:54.984 START TEST sw_hotplug 00:13:54.984 ************************************ 00:13:54.984 13:33:54 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:55.297 * Looking for test storage... 
00:13:55.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.297 13:33:54 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.297 --rc genhtml_branch_coverage=1 00:13:55.297 --rc genhtml_function_coverage=1 00:13:55.297 --rc genhtml_legend=1 00:13:55.297 --rc geninfo_all_blocks=1 00:13:55.297 --rc geninfo_unexecuted_blocks=1 00:13:55.297 00:13:55.297 ' 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.297 --rc genhtml_branch_coverage=1 00:13:55.297 --rc genhtml_function_coverage=1 00:13:55.297 --rc genhtml_legend=1 00:13:55.297 --rc geninfo_all_blocks=1 00:13:55.297 --rc geninfo_unexecuted_blocks=1 00:13:55.297 00:13:55.297 ' 00:13:55.297 13:33:54 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.297 --rc genhtml_branch_coverage=1 00:13:55.297 --rc genhtml_function_coverage=1 00:13:55.297 --rc genhtml_legend=1 00:13:55.297 --rc geninfo_all_blocks=1 00:13:55.297 --rc geninfo_unexecuted_blocks=1 00:13:55.297 00:13:55.297 ' 00:13:55.297 13:33:54 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.297 --rc genhtml_branch_coverage=1 00:13:55.297 --rc genhtml_function_coverage=1 00:13:55.297 --rc genhtml_legend=1 00:13:55.297 --rc geninfo_all_blocks=1 00:13:55.298 --rc geninfo_unexecuted_blocks=1 00:13:55.298 00:13:55.298 ' 00:13:55.298 13:33:54 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:55.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:55.559 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:55.559 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:55.559 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:55.559 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:55.559 13:33:54 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:55.559 13:33:54 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:55.559 13:33:54 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:13:55.559 13:33:54 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:55.559 13:33:54 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:55.559 13:33:54 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:55.559 13:33:54 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:55.559 13:33:54 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:55.559 13:33:54 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:55.559 13:33:54 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:55.821 13:33:54 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:55.821 
13:33:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:55.821 13:33:55 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:55.821 13:33:55 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:55.821 13:33:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:55.821 13:33:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:56.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:56.344 Waiting for block devices as requested 00:13:56.344 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.344 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.344 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.605 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:01.904 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:01.904 13:34:00 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:01.904 13:34:00 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:01.904 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:01.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:01.904 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:02.161 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:02.418 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:02.418 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:02.418 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:02.418 13:34:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66963 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:02.676 13:34:01 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:02.676 13:34:01 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:02.676 13:34:01 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:02.676 13:34:01 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:02.676 13:34:01 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:02.676 13:34:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:02.676 Initializing NVMe Controllers 00:14:02.676 Attaching to 0000:00:10.0 00:14:02.676 Attaching to 0000:00:11.0 00:14:02.676 Attached to 0000:00:10.0 00:14:02.676 Attached to 0000:00:11.0 00:14:02.676 Initialization complete. Starting I/O... 00:14:02.676 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:02.676 QEMU NVMe Ctrl (12341 ): 3 I/Os completed (+3) 00:14:02.676 00:14:04.049 QEMU NVMe Ctrl (12340 ): 2557 I/Os completed (+2557) 00:14:04.049 QEMU NVMe Ctrl (12341 ): 2609 I/Os completed (+2606) 00:14:04.049 00:14:05.008 QEMU NVMe Ctrl (12340 ): 5637 I/Os completed (+3080) 00:14:05.008 QEMU NVMe Ctrl (12341 ): 5817 I/Os completed (+3208) 00:14:05.008 00:14:05.943 QEMU NVMe Ctrl (12340 ): 8688 I/Os completed (+3051) 00:14:05.943 QEMU NVMe Ctrl (12341 ): 8928 I/Os completed (+3111) 00:14:05.943 00:14:06.878 QEMU NVMe Ctrl (12340 ): 11798 I/Os completed (+3110) 00:14:06.878 QEMU NVMe Ctrl (12341 ): 12142 I/Os completed (+3214) 00:14:06.878 00:14:07.819 QEMU NVMe Ctrl (12340 ): 14862 I/Os completed (+3064) 00:14:07.819 QEMU NVMe Ctrl (12341 ): 15218 I/Os completed (+3076) 00:14:07.819 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:08.796 [2024-11-20 13:34:07.893652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:08.796 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:08.796 [2024-11-20 13:34:07.895003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.895132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.895172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.895246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:08.796 [2024-11-20 13:34:07.897252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.897321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.897352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.897380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:08.796 [2024-11-20 13:34:07.915066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
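The `echo 1` traced at sw_hotplug.sh@40 (inside `for dev in "${nvmes[@]}"` at @39) is the detach trigger for each of the two controllers kept after the `nvmes=("${nvmes[@]::nvme_count}")` slice at @135; the nvme_ctrlr_fail/abort messages around it are SPDK reacting to the surprise removal. A minimal sketch of that step, assuming the write lands in each device's sysfs remove node (the trace shows only the echoed value, not the target path):

    # Surprise-remove each NVMe controller from the PCI bus (sysfs path assumed).
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done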
00:14:08.796 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:08.796 [2024-11-20 13:34:07.916414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.916548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.916573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.916590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:08.796 [2024-11-20 13:34:07.918428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.918522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.918594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 [2024-11-20 13:34:07.918624] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:08.796 13:34:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:08.796 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:08.796 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:08.796 Attaching to 0000:00:10.0 00:14:08.797 Attached to 0000:00:10.0 00:14:08.797 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:08.797 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:08.797 13:34:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:08.797 Attaching to 0000:00:11.0 00:14:08.797 Attached to 0000:00:11.0 00:14:09.734 QEMU NVMe Ctrl (12340 ): 2930 I/Os completed (+2930) 00:14:09.735 QEMU NVMe Ctrl (12341 ): 2704 I/Os completed (+2704) 00:14:09.735 00:14:10.671 QEMU NVMe Ctrl (12340 ): 6126 I/Os completed (+3196) 00:14:10.671 QEMU NVMe Ctrl (12341 ): 5890 I/Os completed (+3186) 00:14:10.671 00:14:12.056 QEMU NVMe Ctrl (12340 ): 8997 I/Os completed (+2871) 00:14:12.056 QEMU NVMe Ctrl (12341 ): 8694 I/Os completed (+2804) 00:14:12.056 00:14:12.990 QEMU NVMe Ctrl (12340 ): 12152 I/Os completed (+3155) 00:14:12.991 QEMU NVMe Ctrl (12341 ): 11915 I/Os completed (+3221) 00:14:12.991 00:14:13.924 QEMU NVMe Ctrl (12340 ): 15736 I/Os completed (+3584) 00:14:13.924 QEMU NVMe Ctrl (12341 ): 15499 I/Os completed (+3584) 00:14:13.924 00:14:14.856 QEMU NVMe Ctrl (12340 ): 19144 I/Os completed (+3408) 00:14:14.856 QEMU NVMe Ctrl (12341 ): 18931 I/Os completed (+3432) 00:14:14.856 00:14:15.797 QEMU NVMe Ctrl (12340 ): 22143 I/Os completed (+2999) 00:14:15.797 QEMU NVMe Ctrl (12341 ): 22085 I/Os completed (+3154) 00:14:15.797 00:14:16.735 QEMU NVMe Ctrl (12340 ): 25295 I/Os completed (+3152) 00:14:16.735 QEMU NVMe Ctrl (12341 ): 25372 I/Os completed (+3287) 
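The @56-@62 echoes above ("1", then per device "uio_pci_generic", the BDF twice, and an empty string) are the re-attach half of the cycle, immediately followed by "Attaching to"/"Attached to" for both controllers. One plausible reading; every sysfs path below is an assumption, since the trace records only what was echoed, not where it went:

    # Re-attach sketch: rescan the bus, then steer the device to uio_pci_generic.
    echo 1 > /sys/bus/pci/rescan                                        # @56: bring removed devices back
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the driver
    echo "$dev" > /sys/bus/pci/drivers_probe                            # @60/@61: BDF is echoed twice in the trace
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override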
00:14:16.735 00:14:17.670 QEMU NVMe Ctrl (12340 ): 28462 I/Os completed (+3167) 00:14:17.670 QEMU NVMe Ctrl (12341 ): 28673 I/Os completed (+3301) 00:14:17.670 00:14:19.048 QEMU NVMe Ctrl (12340 ): 31522 I/Os completed (+3060) 00:14:19.048 QEMU NVMe Ctrl (12341 ): 31753 I/Os completed (+3080) 00:14:19.048 00:14:19.985 QEMU NVMe Ctrl (12340 ): 34632 I/Os completed (+3110) 00:14:19.985 QEMU NVMe Ctrl (12341 ): 34856 I/Os completed (+3103) 00:14:19.985 00:14:20.919 QEMU NVMe Ctrl (12340 ): 37604 I/Os completed (+2972) 00:14:20.919 QEMU NVMe Ctrl (12341 ): 37890 I/Os completed (+3034) 00:14:20.919 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:20.919 [2024-11-20 13:34:20.213338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:20.919 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:20.919 [2024-11-20 13:34:20.214920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.215114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.215266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.215298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:20.919 [2024-11-20 13:34:20.217651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.217709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.217734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.217760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:20.919 [2024-11-20 13:34:20.238124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:20.919 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:20.919 [2024-11-20 13:34:20.239366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.239418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.239440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.239455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:20.919 [2024-11-20 13:34:20.241288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.241339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.241356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 [2024-11-20 13:34:20.241371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:20.919 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:20.919 EAL: Scan for (pci) bus failed. 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:20.919 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:21.178 Attaching to 0000:00:10.0 00:14:21.178 Attached to 0000:00:10.0 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:21.178 13:34:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:21.178 Attaching to 0000:00:11.0 00:14:21.178 Attached to 0000:00:11.0 00:14:21.743 QEMU NVMe Ctrl (12340 ): 2266 I/Os completed (+2266) 00:14:21.743 QEMU NVMe Ctrl (12341 ): 2052 I/Os completed (+2052) 00:14:21.743 00:14:22.677 QEMU NVMe Ctrl (12340 ): 5351 I/Os completed (+3085) 00:14:22.677 QEMU NVMe Ctrl (12341 ): 5227 I/Os completed (+3175) 00:14:22.677 00:14:24.061 QEMU NVMe Ctrl (12340 ): 8343 I/Os completed (+2992) 00:14:24.061 QEMU NVMe Ctrl (12341 ): 8272 I/Os completed (+3045) 00:14:24.061 00:14:24.996 QEMU NVMe Ctrl (12340 ): 11491 I/Os completed (+3148) 00:14:24.996 QEMU NVMe Ctrl (12341 ): 11435 I/Os completed (+3163) 00:14:24.996 00:14:25.941 QEMU NVMe Ctrl (12340 ): 14374 I/Os completed (+2883) 00:14:25.941 QEMU NVMe Ctrl (12341 ): 14460 I/Os completed (+3025) 00:14:25.941 00:14:26.877 QEMU NVMe Ctrl (12340 ): 17246 I/Os completed (+2872) 00:14:26.877 QEMU NVMe Ctrl (12341 ): 17452 I/Os completed (+2992) 00:14:26.877 00:14:27.818 QEMU NVMe Ctrl (12340 ): 20117 I/Os completed (+2871) 00:14:27.818 QEMU NVMe Ctrl (12341 ): 20362 I/Os completed (+2910) 00:14:27.818 
00:14:28.837 QEMU NVMe Ctrl (12340 ): 22765 I/Os completed (+2648) 00:14:28.837 QEMU NVMe Ctrl (12341 ): 23018 I/Os completed (+2656) 00:14:28.837 00:14:29.771 QEMU NVMe Ctrl (12340 ): 26058 I/Os completed (+3293) 00:14:29.771 QEMU NVMe Ctrl (12341 ): 26449 I/Os completed (+3431) 00:14:29.771 00:14:30.709 QEMU NVMe Ctrl (12340 ): 29007 I/Os completed (+2949) 00:14:30.709 QEMU NVMe Ctrl (12341 ): 29452 I/Os completed (+3003) 00:14:30.709 00:14:32.081 QEMU NVMe Ctrl (12340 ): 31884 I/Os completed (+2877) 00:14:32.081 QEMU NVMe Ctrl (12341 ): 32503 I/Os completed (+3051) 00:14:32.081 00:14:33.013 QEMU NVMe Ctrl (12340 ): 35082 I/Os completed (+3198) 00:14:33.013 QEMU NVMe Ctrl (12341 ): 35808 I/Os completed (+3305) 00:14:33.013 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.271 [2024-11-20 13:34:32.498426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:33.271 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:33.271 [2024-11-20 13:34:32.499835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.499905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.499934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.499962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:33.271 [2024-11-20 13:34:32.501957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.502023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.502047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.502069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.271 [2024-11-20 13:34:32.522903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:33.271 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:33.271 [2024-11-20 13:34:32.524079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.524131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.524158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.524183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:33.271 [2024-11-20 13:34:32.525919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.525962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.526015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 [2024-11-20 13:34:32.526036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:33.271 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:33.271 EAL: Scan for (pci) bus failed. 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.271 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:33.271 Attaching to 0000:00:10.0 00:14:33.271 Attached to 0000:00:10.0 00:14:33.529 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:33.529 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.529 13:34:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:33.529 Attaching to 0000:00:11.0 00:14:33.529 Attached to 0000:00:11.0 00:14:33.529 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:33.529 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:33.529 [2024-11-20 13:34:32.766961] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:45.789 13:34:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:45.789 13:34:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:45.789 13:34:44 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.87 00:14:45.789 13:34:44 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.87 00:14:45.789 13:34:44 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:45.789 13:34:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.87 00:14:45.789 13:34:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 2 00:14:45.789 remove_attach_helper took 42.87s to complete (handling 2 nvme drive(s)) 13:34:44 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66963 00:14:52.344 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66963) - No such process 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66963 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67512 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67512 00:14:52.344 13:34:50 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:52.344 13:34:50 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67512 ']' 00:14:52.344 13:34:50 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.344 13:34:50 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.344 13:34:50 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.344 13:34:50 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.344 13:34:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:52.344 [2024-11-20 13:34:50.834003] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:14:52.344 [2024-11-20 13:34:50.834478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67512 ] 00:14:52.344 [2024-11-20 13:34:50.981490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.344 [2024-11-20 13:34:51.065306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:52.344 13:34:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:52.344 13:34:51 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:58.964 13:34:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.964 13:34:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.964 13:34:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:58.964 13:34:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:58.964 [2024-11-20 13:34:57.786106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:14:58.964 [2024-11-20 13:34:57.787523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:57.787559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:57.787573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:57.787592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:57.787600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:57.787609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:57.787617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:57.787625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:57.787632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:57.787643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:57.787650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:57.787658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:58.186103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
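From this point the test drives a long-lived spdk_tgt rather than the standalone hotplug example: `rpc_cmd bdev_nvme_set_hotplug -e` (sw_hotplug.sh@115) enables the target's hotplug monitor, and the `bdev_bdfs` helper (@12-@13) polls `bdev_get_bdevs` to see which PCI addresses still back a bdev. The same two calls issued with the stock rpc.py client, which talks to the default /var/tmp/spdk.sock socket the target announced above; the jq filter is verbatim from the trace:

    # Enable the NVMe hotplug monitor, then list the BDFs currently backing bdevs.
    scripts/rpc.py bdev_nvme_set_hotplug -e
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u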
00:14:58.964 [2024-11-20 13:34:58.187550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:58.187581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:58.187594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:58.187611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:58.187620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:58.187627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:58.187636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:58.187643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:58.187651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 [2024-11-20 13:34:58.187658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.964 [2024-11-20 13:34:58.187666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.964 [2024-11-20 13:34:58.187673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:58.964 13:34:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.964 13:34:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.964 13:34:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.964 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:59.223 13:34:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:11.417 13:35:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.417 13:35:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.417 13:35:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:11.417 13:35:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.417 13:35:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.417 13:35:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:11.417 13:35:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:11.417 [2024-11-20 13:35:10.686282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:11.417 [2024-11-20 13:35:10.687654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.417 [2024-11-20 13:35:10.687688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.417 [2024-11-20 13:35:10.687700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.417 [2024-11-20 13:35:10.687718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.417 [2024-11-20 13:35:10.687726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.417 [2024-11-20 13:35:10.687736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.417 [2024-11-20 13:35:10.687743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.417 [2024-11-20 13:35:10.687752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.417 [2024-11-20 13:35:10.687759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.417 [2024-11-20 13:35:10.687768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.417 [2024-11-20 13:35:10.687775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.417 [2024-11-20 13:35:10.687783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:11.984 [2024-11-20 13:35:11.186282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
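The "Still waiting for %s to be gone" printf at @51 belongs to a poll loop: after triggering removal, the script re-reads `bdev_bdfs` every half second until no BDF is reported, which is why `(( 2 > 0 ))`, `sleep 0.5`, and then `(( 0 > 0 ))` appear in the trace. Reassembled from the traced lines at @50-@51; the `while` framing is assumed, the individual statements are as shown:

    # Wait until the removed controllers stop showing up in bdev_get_bdevs.
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done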
00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:11.984 13:35:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.984 13:35:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.984 [2024-11-20 13:35:11.187653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.984 [2024-11-20 13:35:11.187682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.984 [2024-11-20 13:35:11.187696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.984 [2024-11-20 13:35:11.187713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.984 [2024-11-20 13:35:11.187727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.984 [2024-11-20 13:35:11.187735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.984 [2024-11-20 13:35:11.187745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.984 [2024-11-20 13:35:11.187752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.984 [2024-11-20 13:35:11.187761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.984 [2024-11-20 13:35:11.187768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.984 [2024-11-20 13:35:11.187776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.984 [2024-11-20 13:35:11.187783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:11.984 13:35:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:11.984 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:12.242 13:35:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@68 
-- # true 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.435 13:35:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.435 13:35:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.435 13:35:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.435 13:35:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.435 13:35:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.435 [2024-11-20 13:35:23.588087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:24.435 [2024-11-20 13:35:23.589544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.435 [2024-11-20 13:35:23.589578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.435 [2024-11-20 13:35:23.589590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.435 [2024-11-20 13:35:23.589610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.435 [2024-11-20 13:35:23.589618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.435 [2024-11-20 13:35:23.589629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.435 [2024-11-20 13:35:23.589637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.435 [2024-11-20 13:35:23.589646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.435 [2024-11-20 13:35:23.589653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.435 [2024-11-20 13:35:23.589662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.435 [2024-11-20 13:35:23.589669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.435 [2024-11-20 13:35:23.589677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.435 13:35:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:24.435 13:35:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:24.693 [2024-11-20 13:35:23.988093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:24.693 [2024-11-20 13:35:23.989491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.693 [2024-11-20 13:35:23.989523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.693 [2024-11-20 13:35:23.989536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.693 [2024-11-20 13:35:23.989553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.693 [2024-11-20 13:35:23.989562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.693 [2024-11-20 13:35:23.989570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.693 [2024-11-20 13:35:23.989579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.693 [2024-11-20 13:35:23.989586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.693 [2024-11-20 13:35:23.989597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.693 [2024-11-20 13:35:23.989604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.693 [2024-11-20 13:35:23.989613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.693 [2024-11-20 13:35:23.989619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.693 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:24.693 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:24.693 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:24.693 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.693 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.693 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.693 13:35:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.693 13:35:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.952 13:35:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:24.952 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:25.210 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:25.210 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:25.210 13:35:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.75 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.75 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.75 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.75 2 00:15:37.408 remove_attach_helper took 44.75s to complete (handling 2 nvme drive(s)) 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:37.408 13:35:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:37.408 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:37.408 13:35:36 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:44.071 13:35:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.071 13:35:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:44.071 13:35:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:44.071 13:35:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:44.071 [2024-11-20 13:35:42.567917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:44.071 [2024-11-20 13:35:42.569100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.071 [2024-11-20 13:35:42.569138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.071 [2024-11-20 13:35:42.569155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.071 [2024-11-20 13:35:42.569177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.071 [2024-11-20 13:35:42.569185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.071 [2024-11-20 13:35:42.569193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.071 [2024-11-20 13:35:42.569200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.071 [2024-11-20 13:35:42.569211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.071 [2024-11-20 13:35:42.569218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.071 [2024-11-20 13:35:42.569227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.071 [2024-11-20 13:35:42.569233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.071 [2024-11-20 13:35:42.569244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:44.071 13:35:43 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:44.071 13:35:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.071 13:35:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:44.071 13:35:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:44.071 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:44.071 [2024-11-20 13:35:43.167929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:44.071 [2024-11-20 13:35:43.169037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.072 [2024-11-20 13:35:43.169069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.072 [2024-11-20 13:35:43.169082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.072 [2024-11-20 13:35:43.169099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.072 [2024-11-20 13:35:43.169108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.072 [2024-11-20 13:35:43.169115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.072 [2024-11-20 13:35:43.169123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.072 [2024-11-20 13:35:43.169130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.072 [2024-11-20 13:35:43.169138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.072 [2024-11-20 13:35:43.169145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:44.072 [2024-11-20 13:35:43.169153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.072 [2024-11-20 13:35:43.169160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:44.351 13:35:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.351 13:35:43 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.351 13:35:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:44.351 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:44.610 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:56.927 13:35:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.927 13:35:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:56.927 13:35:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:56.927 13:35:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.927 13:35:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:56.927 13:35:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.927 [2024-11-20 13:35:55.968172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:56.927 13:35:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:56.927 [2024-11-20 13:35:55.969509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.927 [2024-11-20 13:35:55.969549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.927 [2024-11-20 13:35:55.969560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.927 [2024-11-20 13:35:55.969578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.927 [2024-11-20 13:35:55.969586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.927 [2024-11-20 13:35:55.969596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.927 [2024-11-20 13:35:55.969603] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.927 [2024-11-20 13:35:55.969611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.927 [2024-11-20 13:35:55.969618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.927 [2024-11-20 13:35:55.969627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.927 [2024-11-20 13:35:55.969634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.927 [2024-11-20 13:35:55.969642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.185 [2024-11-20 13:35:56.368184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
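
The trace above is the core of the hot-remove wait: sw_hotplug.sh@12-13 turn the bdev_get_bdevs RPC output into a sorted list of NVMe PCI addresses, and sw_hotplug.sh@50-51 poll that list until the removed controllers stop appearing. A minimal standalone sketch of that loop, assuming rpc_cmd wraps scripts/rpc.py against the running target (the jq filter and sort -u are verbatim from the trace):

    # Assumption: rpc_cmd talks to the SPDK target over its default RPC socket.
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    # List the PCI addresses (BDFs) currently backing NVMe bdevs.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll every 0.5 s until the surprise-removed controllers are gone.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
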
00:15:57.185 [2024-11-20 13:35:56.369329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.185 [2024-11-20 13:35:56.369367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.185 [2024-11-20 13:35:56.369379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.185 [2024-11-20 13:35:56.369396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.185 [2024-11-20 13:35:56.369407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.185 [2024-11-20 13:35:56.369414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.185 [2024-11-20 13:35:56.369423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.185 [2024-11-20 13:35:56.369431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.185 [2024-11-20 13:35:56.369439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.185 [2024-11-20 13:35:56.369446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:57.185 [2024-11-20 13:35:56.369454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.185 [2024-11-20 13:35:56.369461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:57.185 13:35:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.185 13:35:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:57.185 13:35:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:57.185 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:57.443 13:35:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.653 13:36:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.653 13:36:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.653 13:36:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.653 13:36:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.653 13:36:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.653 13:36:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:09.653 13:36:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:09.653 [2024-11-20 13:36:08.868389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:09.653 [2024-11-20 13:36:08.869598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.653 [2024-11-20 13:36:08.869632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.653 [2024-11-20 13:36:08.869644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.653 [2024-11-20 13:36:08.869661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.653 [2024-11-20 13:36:08.869668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.653 [2024-11-20 13:36:08.869677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.653 [2024-11-20 13:36:08.869684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.653 [2024-11-20 13:36:08.869694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.653 [2024-11-20 13:36:08.869701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.653 [2024-11-20 13:36:08.869709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.653 [2024-11-20 13:36:08.869716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.654 [2024-11-20 13:36:08.869724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.912 [2024-11-20 13:36:09.268389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
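
Each iteration detaches and reattaches both controllers: sw_hotplug.sh@39-40 echo 1 once per device, @56 echoes 1 again after the wait, and @58-62 echo uio_pci_generic, the BDF twice, and an empty string per device. The xtrace shows only the echoed values, never the redirection targets, so the sysfs paths in this sketch are assumptions based on the standard Linux PCI hotplug interface, not paths confirmed by the log:

    nvmes=(0000:00:10.0 0000:00:11.0)   # BDFs exercised in this run

    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove (sh@40)
    done
    # ... poll until the bdevs disappear (see the loop sketched earlier) ...
    echo 1 > /sys/bus/pci/rescan                      # re-enumerate the bus (sh@56)
    for bdf in "${nvmes[@]}"; do
        # sh@59-62: pin the device to the generic userspace driver, kick a
        # probe, then clear the override. The BDF is echoed twice in the
        # trace, plausibly an unbind followed by a probe (targets assumed).
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > "/sys/bus/pci/drivers/uio_pci_generic/unbind" || true
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done
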
00:16:09.912 [2024-11-20 13:36:09.269668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.912 [2024-11-20 13:36:09.269701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.912 [2024-11-20 13:36:09.269715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.912 [2024-11-20 13:36:09.269729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.912 [2024-11-20 13:36:09.269738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.912 [2024-11-20 13:36:09.269744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.912 [2024-11-20 13:36:09.269753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.912 [2024-11-20 13:36:09.269760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.912 [2024-11-20 13:36:09.269768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.912 [2024-11-20 13:36:09.269776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.912 [2024-11-20 13:36:09.269786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.912 [2024-11-20 13:36:09.269793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:10.170 13:36:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.170 13:36:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:10.170 13:36:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.170 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:10.429 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:10.429 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.429 13:36:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:16:22.687 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:22.687 13:36:21 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67512 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67512 ']' 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67512 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67512 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67512' 00:16:22.687 killing process with pid 67512 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67512 00:16:22.687 13:36:21 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67512 00:16:23.623 13:36:22 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:23.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.176 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.176 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.434 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.434 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.434 00:16:24.434 real 2m29.407s 00:16:24.434 user 1m51.217s 00:16:24.434 sys 0m16.772s 00:16:24.434 13:36:23 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.434 13:36:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.434 ************************************ 00:16:24.434 END TEST sw_hotplug 00:16:24.434 ************************************ 00:16:24.434 13:36:23 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:24.434 13:36:23 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:24.434 13:36:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.434 13:36:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.434 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:24.434 ************************************ 00:16:24.434 START TEST nvme_xnvme 00:16:24.434 ************************************ 00:16:24.434 13:36:23 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:24.434 * Looking for test storage... 00:16:24.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.434 13:36:23 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.434 13:36:23 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.434 13:36:23 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.694 13:36:23 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.694 13:36:23 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:24.694 13:36:23 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.694 13:36:23 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.694 --rc genhtml_branch_coverage=1 00:16:24.694 --rc genhtml_function_coverage=1 00:16:24.694 --rc genhtml_legend=1 00:16:24.694 --rc geninfo_all_blocks=1 00:16:24.694 --rc geninfo_unexecuted_blocks=1 00:16:24.694 00:16:24.694 ' 00:16:24.694 13:36:23 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.694 --rc genhtml_branch_coverage=1 00:16:24.694 --rc genhtml_function_coverage=1 00:16:24.694 --rc genhtml_legend=1 00:16:24.694 --rc geninfo_all_blocks=1 00:16:24.694 --rc geninfo_unexecuted_blocks=1 00:16:24.694 00:16:24.694 ' 00:16:24.694 13:36:23 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.694 --rc genhtml_branch_coverage=1 00:16:24.695 --rc genhtml_function_coverage=1 00:16:24.695 --rc genhtml_legend=1 00:16:24.695 --rc geninfo_all_blocks=1 00:16:24.695 --rc geninfo_unexecuted_blocks=1 00:16:24.695 00:16:24.695 ' 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.695 --rc genhtml_branch_coverage=1 00:16:24.695 --rc genhtml_function_coverage=1 00:16:24.695 --rc genhtml_legend=1 00:16:24.695 --rc geninfo_all_blocks=1 00:16:24.695 --rc geninfo_unexecuted_blocks=1 00:16:24.695 00:16:24.695 ' 00:16:24.695 13:36:23 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:24.695 13:36:23 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:24.695 13:36:23 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:24.695 13:36:23 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:24.695 13:36:23 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:24.695 13:36:23 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:24.695 13:36:23 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:24.695 #define SPDK_CONFIG_H 00:16:24.695 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:24.695 #define SPDK_CONFIG_APPS 1 00:16:24.695 #define SPDK_CONFIG_ARCH native 00:16:24.695 #define SPDK_CONFIG_ASAN 1 00:16:24.695 #undef SPDK_CONFIG_AVAHI 00:16:24.695 #undef SPDK_CONFIG_CET 00:16:24.695 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:24.695 #define SPDK_CONFIG_COVERAGE 1 00:16:24.695 #define SPDK_CONFIG_CROSS_PREFIX 00:16:24.695 #undef SPDK_CONFIG_CRYPTO 00:16:24.695 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:24.696 #undef SPDK_CONFIG_CUSTOMOCF 00:16:24.696 #undef SPDK_CONFIG_DAOS 00:16:24.696 #define SPDK_CONFIG_DAOS_DIR 00:16:24.696 #define SPDK_CONFIG_DEBUG 1 00:16:24.696 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:24.696 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:24.696 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:24.696 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:24.696 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:24.696 #undef SPDK_CONFIG_DPDK_UADK 00:16:24.696 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:24.696 #define SPDK_CONFIG_EXAMPLES 1 00:16:24.696 #undef SPDK_CONFIG_FC 00:16:24.696 #define SPDK_CONFIG_FC_PATH 00:16:24.696 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:24.696 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:24.696 #define SPDK_CONFIG_FSDEV 1 00:16:24.696 #undef SPDK_CONFIG_FUSE 00:16:24.696 #undef SPDK_CONFIG_FUZZER 00:16:24.696 #define SPDK_CONFIG_FUZZER_LIB 00:16:24.696 #undef SPDK_CONFIG_GOLANG 00:16:24.696 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:24.696 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:24.696 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:24.696 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:24.696 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:24.696 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:24.696 #undef SPDK_CONFIG_HAVE_LZ4 00:16:24.696 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:24.696 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:24.696 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:24.696 #define SPDK_CONFIG_IDXD 1 00:16:24.696 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:24.696 #undef SPDK_CONFIG_IPSEC_MB 00:16:24.696 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:24.696 #define SPDK_CONFIG_ISAL 1 00:16:24.696 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:24.696 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:24.696 #define SPDK_CONFIG_LIBDIR 00:16:24.696 #undef SPDK_CONFIG_LTO 00:16:24.696 #define SPDK_CONFIG_MAX_LCORES 128 00:16:24.696 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:24.696 #define SPDK_CONFIG_NVME_CUSE 1 00:16:24.696 #undef SPDK_CONFIG_OCF 00:16:24.696 #define SPDK_CONFIG_OCF_PATH 00:16:24.696 #define SPDK_CONFIG_OPENSSL_PATH 00:16:24.696 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:24.696 #define SPDK_CONFIG_PGO_DIR 00:16:24.696 #undef SPDK_CONFIG_PGO_USE 00:16:24.696 #define SPDK_CONFIG_PREFIX /usr/local 00:16:24.696 #undef SPDK_CONFIG_RAID5F 00:16:24.696 #undef SPDK_CONFIG_RBD 00:16:24.696 #define SPDK_CONFIG_RDMA 1 00:16:24.696 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:24.696 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:24.696 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:24.696 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:24.696 #define SPDK_CONFIG_SHARED 1 00:16:24.696 #undef SPDK_CONFIG_SMA 00:16:24.696 #define SPDK_CONFIG_TESTS 1 00:16:24.696 #undef SPDK_CONFIG_TSAN 00:16:24.696 #define SPDK_CONFIG_UBLK 1 00:16:24.696 #define SPDK_CONFIG_UBSAN 1 00:16:24.696 #undef SPDK_CONFIG_UNIT_TESTS 00:16:24.696 #undef SPDK_CONFIG_URING 00:16:24.696 #define SPDK_CONFIG_URING_PATH 00:16:24.696 #undef SPDK_CONFIG_URING_ZNS 00:16:24.696 #undef SPDK_CONFIG_USDT 00:16:24.696 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:24.696 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:24.696 #undef SPDK_CONFIG_VFIO_USER 00:16:24.696 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:24.696 #define SPDK_CONFIG_VHOST 1 00:16:24.696 #define SPDK_CONFIG_VIRTIO 1 00:16:24.696 #undef SPDK_CONFIG_VTUNE 00:16:24.696 #define SPDK_CONFIG_VTUNE_DIR 00:16:24.696 #define SPDK_CONFIG_WERROR 1 00:16:24.696 #define SPDK_CONFIG_WPDK_DIR 00:16:24.696 #define SPDK_CONFIG_XNVME 1 00:16:24.696 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:24.696 13:36:23 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:24.696 13:36:23 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.696 13:36:23 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.696 13:36:23 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.696 13:36:23 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.696 13:36:23 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.696 13:36:23 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.696 13:36:23 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.696 13:36:23 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.696 13:36:23 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:24.696 13:36:23 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:24.696 
13:36:23 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:24.696 13:36:23 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:24.696 13:36:23 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:24.697 13:36:23 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:24.697 13:36:23 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:24.697 13:36:23 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
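The sanitizer plumbing traced at common/autotest_common.sh@199-@244 above reduces to a handful of exports plus an LSAN suppression file. A condensed, hand-runnable sketch of that sequence (values copied verbatim from this run; the canonical logic lives in autotest_common.sh itself):

# Sanitizer runtime options as exported by the harness above.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
# Known third-party leaks (libfuse3) are suppressed rather than failing the run.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file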
00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68867 ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68867 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Jcxo8r 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Jcxo8r/tests/xnvme /tmp/spdk.Jcxo8r 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:24.698 13:36:23 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977325568 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590429696 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977325568 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590429696 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=90802864128 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=8899915776 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:24.698 * Looking for test storage... 
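The mount bookkeeping traced above is set_test_storage() reading df -T into associative arrays keyed by mount point. A simplified reconstruction (the traced values are bytes while df -T prints 1 KiB blocks, so a 1024 scale factor is assumed here):

# Simplified sketch of the df -T parsing traced at @373-@376 above.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source          # e.g. /dev/vda5
    fss["$mount"]=$fs                 # e.g. btrfs
    sizes["$mount"]=$((size * 1024))  # store bytes, not 1 KiB blocks
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)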
00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13977325568 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.698 13:36:23 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.699 13:36:23 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.699 13:36:24 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:24.699 13:36:24 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.699 13:36:24 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.699 --rc genhtml_branch_coverage=1 00:16:24.699 --rc genhtml_function_coverage=1 00:16:24.699 --rc genhtml_legend=1 00:16:24.699 --rc geninfo_all_blocks=1 00:16:24.699 --rc geninfo_unexecuted_blocks=1 00:16:24.699 00:16:24.699 ' 00:16:24.699 13:36:24 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.699 --rc genhtml_branch_coverage=1 00:16:24.699 --rc genhtml_function_coverage=1 00:16:24.699 --rc genhtml_legend=1 00:16:24.699 --rc geninfo_all_blocks=1 
00:16:24.699 --rc geninfo_unexecuted_blocks=1 00:16:24.699 00:16:24.699 ' 00:16:24.699 13:36:24 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.699 --rc genhtml_branch_coverage=1 00:16:24.699 --rc genhtml_function_coverage=1 00:16:24.699 --rc genhtml_legend=1 00:16:24.699 --rc geninfo_all_blocks=1 00:16:24.699 --rc geninfo_unexecuted_blocks=1 00:16:24.699 00:16:24.699 ' 00:16:24.699 13:36:24 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.699 --rc genhtml_branch_coverage=1 00:16:24.699 --rc genhtml_function_coverage=1 00:16:24.699 --rc genhtml_legend=1 00:16:24.699 --rc geninfo_all_blocks=1 00:16:24.699 --rc geninfo_unexecuted_blocks=1 00:16:24.699 00:16:24.699 ' 00:16:24.699 13:36:24 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.699 13:36:24 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.699 13:36:24 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.699 13:36:24 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.699 13:36:24 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.699 13:36:24 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:24.699 13:36:24 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.699 13:36:24 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:24.699 13:36:24 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:24.700 13:36:24 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:24.700 13:36:24 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:24.700 13:36:24 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:24.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:25.214 Waiting for block devices as requested 00:16:25.214 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.214 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.214 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.472 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.734 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:30.734 13:36:29 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:30.734 13:36:30 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:30.734 13:36:30 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:30.993 13:36:30 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:30.993 13:36:30 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:30.993 No valid GPT data, bailing 00:16:30.993 13:36:30 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:30.993 13:36:30 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:16:30.993 13:36:30 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:30.993 13:36:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:30.993 13:36:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:30.993 13:36:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.993 13:36:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.993 ************************************ 00:16:30.993 START TEST xnvme_rpc 00:16:30.993 ************************************ 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69252 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69252 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69252 ']' 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.993 13:36:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:30.993 [2024-11-20 13:36:30.408735] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
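The xnvme_rpc test starting here drives spdk_tgt through rpc_cmd, the harness wrapper around scripts/rpc.py on the default socket /var/tmp/spdk.sock. The same sequence can be replayed by hand (paths taken from this run; the harness waits with waitforlisten rather than sleeping):

# Start the target, then drive it over the default UNIX-socket RPC.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
sleep 1   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

cd /home/vagrant/spdk_repo/spdk
# Create the xnvme bdev exactly as xnvme_rpc does (append -c for conserve_cpu).
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
# Read one param back out of the saved config, as rpc_xnvme does with jq.
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# Tear down the bdev again.
scripts/rpc.py bdev_xnvme_delete xnvme_bdev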
00:16:30.993 [2024-11-20 13:36:30.408866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69252 ] 00:16:31.276 [2024-11-20 13:36:30.568180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.276 [2024-11-20 13:36:30.671151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.228 xnvme_bdev 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:32.228 13:36:31 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69252 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69252 ']' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69252 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69252 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.228 killing process with pid 69252 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69252' 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69252 00:16:32.228 13:36:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69252 00:16:33.603 00:16:33.603 real 0m2.657s 00:16:33.603 user 0m2.760s 00:16:33.603 sys 0m0.381s 00:16:33.603 13:36:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.603 13:36:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.603 ************************************ 00:16:33.603 END TEST xnvme_rpc 00:16:33.603 ************************************ 00:16:33.603 13:36:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:33.603 13:36:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:33.603 13:36:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.603 13:36:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:33.603 ************************************ 00:16:33.603 START TEST xnvme_bdevperf 00:16:33.603 ************************************ 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:33.603 13:36:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:33.861 { 00:16:33.861 "subsystems": [ 00:16:33.861 { 00:16:33.861 "subsystem": "bdev", 00:16:33.861 "config": [ 00:16:33.861 { 00:16:33.861 "params": { 00:16:33.861 "io_mechanism": "libaio", 00:16:33.861 "conserve_cpu": false, 00:16:33.861 "filename": "/dev/nvme0n1", 00:16:33.861 "name": "xnvme_bdev" 00:16:33.861 }, 00:16:33.861 "method": "bdev_xnvme_create" 00:16:33.861 }, 00:16:33.861 { 00:16:33.861 "method": "bdev_wait_for_examine" 00:16:33.861 } 00:16:33.861 ] 00:16:33.861 } 00:16:33.861 ] 00:16:33.861 } 00:16:33.861 [2024-11-20 13:36:33.090200] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:16:33.861 [2024-11-20 13:36:33.090325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69321 ] 00:16:33.861 [2024-11-20 13:36:33.243421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.119 [2024-11-20 13:36:33.360994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.377 Running I/O for 5 seconds... 00:16:36.244 38715.00 IOPS, 151.23 MiB/s [2024-11-20T13:36:37.044Z] 38754.00 IOPS, 151.38 MiB/s [2024-11-20T13:36:37.997Z] 38400.67 IOPS, 150.00 MiB/s [2024-11-20T13:36:38.930Z] 38565.25 IOPS, 150.65 MiB/s [2024-11-20T13:36:38.930Z] 38375.40 IOPS, 149.90 MiB/s 00:16:39.503 Latency(us) 00:16:39.503 [2024-11-20T13:36:38.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.503 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:39.503 xnvme_bdev : 5.00 38350.75 149.81 0.00 0.00 1664.58 379.67 7713.08 00:16:39.503 [2024-11-20T13:36:38.930Z] =================================================================================================================== 00:16:39.503 [2024-11-20T13:36:38.930Z] Total : 38350.75 149.81 0.00 0.00 1664.58 379.67 7713.08 00:16:40.074 13:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:40.074 13:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:40.074 13:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:40.074 13:36:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:40.074 13:36:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:40.074 { 00:16:40.074 "subsystems": [ 00:16:40.074 { 00:16:40.074 "subsystem": "bdev", 00:16:40.074 "config": [ 00:16:40.074 { 00:16:40.074 "params": { 00:16:40.074 "io_mechanism": "libaio", 00:16:40.074 "conserve_cpu": false, 00:16:40.074 "filename": "/dev/nvme0n1", 00:16:40.074 "name": "xnvme_bdev" 00:16:40.074 }, 00:16:40.074 "method": "bdev_xnvme_create" 00:16:40.074 }, 00:16:40.074 { 00:16:40.074 "method": "bdev_wait_for_examine" 00:16:40.074 } 00:16:40.074 ] 00:16:40.074 } 00:16:40.074 ] 00:16:40.074 } 00:16:40.074 [2024-11-20 13:36:39.459105] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
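The JSON printed above is the entire configuration bdevperf receives on /dev/fd/62, so the run is reproducible outside the harness by writing the same document to a file (the /tmp path below is illustrative):

# Same config bdevperf received above, written to a scratch file.
cat > /tmp/xnvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# Flags copied from the traced invocation: QD 64, 4 KiB random reads, 5 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096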
00:16:40.074 [2024-11-20 13:36:39.459236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69396 ] 00:16:40.332 [2024-11-20 13:36:39.611437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.332 [2024-11-20 13:36:39.717372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.590 Running I/O for 5 seconds... 00:16:42.904 35819.00 IOPS, 139.92 MiB/s [2024-11-20T13:36:43.303Z] 36103.50 IOPS, 141.03 MiB/s [2024-11-20T13:36:44.253Z] 35750.67 IOPS, 139.65 MiB/s [2024-11-20T13:36:45.190Z] 36460.75 IOPS, 142.42 MiB/s 00:16:45.763 Latency(us) 00:16:45.763 [2024-11-20T13:36:45.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.763 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:45.763 xnvme_bdev : 5.00 37024.45 144.63 0.00 0.00 1724.00 225.28 7410.61 00:16:45.763 [2024-11-20T13:36:45.190Z] =================================================================================================================== 00:16:45.763 [2024-11-20T13:36:45.190Z] Total : 37024.45 144.63 0.00 0.00 1724.00 225.28 7410.61 00:16:46.329 00:16:46.329 real 0m12.574s 00:16:46.329 user 0m4.511s 00:16:46.329 sys 0m5.611s 00:16:46.329 13:36:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.329 13:36:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:46.329 ************************************ 00:16:46.329 END TEST xnvme_bdevperf 00:16:46.329 ************************************ 00:16:46.329 13:36:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:46.329 13:36:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.329 13:36:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.329 13:36:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.329 ************************************ 00:16:46.329 START TEST xnvme_fio_plugin 00:16:46.329 ************************************ 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:46.329 13:36:45 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:46.329 13:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:46.329 { 00:16:46.329 "subsystems": [ 00:16:46.329 { 00:16:46.329 "subsystem": "bdev", 00:16:46.329 "config": [ 00:16:46.329 { 00:16:46.329 "params": { 00:16:46.329 "io_mechanism": "libaio", 00:16:46.329 "conserve_cpu": false, 00:16:46.329 "filename": "/dev/nvme0n1", 00:16:46.329 "name": "xnvme_bdev" 00:16:46.329 }, 00:16:46.329 "method": "bdev_xnvme_create" 00:16:46.329 }, 00:16:46.329 { 00:16:46.329 "method": "bdev_wait_for_examine" 00:16:46.329 } 00:16:46.329 ] 00:16:46.329 } 00:16:46.329 ] 00:16:46.329 } 00:16:46.587 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:46.587 fio-3.35 00:16:46.587 Starting 1 thread 00:16:53.178 00:16:53.178 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69512: Wed Nov 20 13:36:51 2024 00:16:53.178 read: IOPS=44.4k, BW=173MiB/s (182MB/s)(867MiB/5001msec) 00:16:53.178 slat (usec): min=3, max=1648, avg=18.55, stdev=44.09 00:16:53.178 clat (usec): min=75, max=11520, avg=893.66, stdev=537.92 00:16:53.178 lat (usec): min=131, max=11524, avg=912.21, stdev=539.56 00:16:53.178 clat percentiles (usec): 00:16:53.178 | 1.00th=[ 178], 5.00th=[ 255], 10.00th=[ 330], 20.00th=[ 453], 00:16:53.178 | 30.00th=[ 562], 40.00th=[ 676], 50.00th=[ 791], 60.00th=[ 914], 00:16:53.178 | 70.00th=[ 1057], 80.00th=[ 1237], 90.00th=[ 1582], 95.00th=[ 1926], 00:16:53.178 | 99.00th=[ 2737], 99.50th=[ 3064], 99.90th=[ 3752], 99.95th=[ 4047], 00:16:53.178 | 99.99th=[ 4883] 00:16:53.178 bw ( KiB/s): min=151720, max=206760, per=100.00%, avg=178424.00, stdev=18458.90, 
samples=9 00:16:53.178 iops : min=37930, max=51690, avg=44606.00, stdev=4614.73, samples=9 00:16:53.178 lat (usec) : 100=0.01%, 250=4.73%, 500=19.63%, 750=22.07%, 1000=20.13% 00:16:53.178 lat (msec) : 2=29.15%, 4=4.23%, 10=0.06%, 20=0.01% 00:16:53.178 cpu : usr=31.92%, sys=52.04%, ctx=41, majf=0, minf=764 00:16:53.178 IO depths : 1=0.2%, 2=1.2%, 4=4.1%, 8=10.6%, 16=24.8%, 32=57.1%, >=64=1.9% 00:16:53.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.178 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:53.178 issued rwts: total=221983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:53.178 00:16:53.178 Run status group 0 (all jobs): 00:16:53.178 READ: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=867MiB (909MB), run=5001-5001msec 00:16:53.178 ----------------------------------------------------- 00:16:53.178 Suppressions used: 00:16:53.178 count bytes template 00:16:53.178 1 11 /usr/src/fio/parse.c 00:16:53.178 1 8 libtcmalloc_minimal.so 00:16:53.178 1 904 libcrypto.so 00:16:53.178 ----------------------------------------------------- 00:16:53.178 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:53.178 13:36:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:53.178 { 00:16:53.178 "subsystems": [ 00:16:53.178 { 00:16:53.178 "subsystem": "bdev", 00:16:53.178 "config": [ 00:16:53.178 { 00:16:53.178 "params": { 00:16:53.178 "io_mechanism": "libaio", 00:16:53.178 "conserve_cpu": false, 00:16:53.178 "filename": "/dev/nvme0n1", 00:16:53.178 "name": "xnvme_bdev" 00:16:53.178 }, 00:16:53.178 "method": "bdev_xnvme_create" 00:16:53.178 }, 00:16:53.178 { 00:16:53.178 "method": "bdev_wait_for_examine" 00:16:53.178 } 00:16:53.178 ] 00:16:53.178 } 00:16:53.178 ] 00:16:53.178 } 00:16:53.178 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:53.178 fio-3.35 00:16:53.178 Starting 1 thread 00:16:59.761 00:16:59.761 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69609: Wed Nov 20 13:36:58 2024 00:16:59.761 write: IOPS=35.7k, BW=139MiB/s (146MB/s)(697MiB/5001msec); 0 zone resets 00:16:59.761 slat (usec): min=3, max=1831, avg=20.27, stdev=80.18 00:16:59.761 clat (usec): min=81, max=31598, avg=1237.88, stdev=868.99 00:16:59.761 lat (usec): min=157, max=31603, avg=1258.15, stdev=865.74 00:16:59.761 clat percentiles (usec): 00:16:59.761 | 1.00th=[ 237], 5.00th=[ 388], 10.00th=[ 529], 20.00th=[ 734], 00:16:59.761 | 30.00th=[ 898], 40.00th=[ 1037], 50.00th=[ 1172], 60.00th=[ 1319], 00:16:59.761 | 70.00th=[ 1467], 80.00th=[ 1663], 90.00th=[ 1926], 95.00th=[ 2180], 00:16:59.761 | 99.00th=[ 2933], 99.50th=[ 3294], 99.90th=[ 5276], 99.95th=[23987], 00:16:59.761 | 99.99th=[28967] 00:16:59.761 bw ( KiB/s): min=127488, max=167768, per=100.00%, avg=143843.56, stdev=13977.58, samples=9 00:16:59.761 iops : min=31872, max=41942, avg=35960.89, stdev=3494.40, samples=9 00:16:59.761 lat (usec) : 100=0.01%, 250=1.23%, 500=7.69%, 750=12.10%, 1000=16.15% 00:16:59.761 lat (msec) : 2=55.01%, 4=7.67%, 10=0.09%, 50=0.07% 00:16:59.761 cpu : usr=40.82%, sys=48.92%, ctx=12, majf=0, minf=765 00:16:59.761 IO depths : 1=0.4%, 2=1.1%, 4=3.2%, 8=8.8%, 16=23.3%, 32=61.1%, >=64=2.0% 00:16:59.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.761 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:16:59.761 issued rwts: total=0,178501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:59.761 00:16:59.761 Run status group 0 (all jobs): 00:16:59.761 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=697MiB (731MB), run=5001-5001msec 00:17:00.021 ----------------------------------------------------- 00:17:00.021 Suppressions used: 00:17:00.021 count bytes template 00:17:00.021 1 11 /usr/src/fio/parse.c 00:17:00.021 1 8 libtcmalloc_minimal.so 00:17:00.021 1 904 libcrypto.so 00:17:00.021 ----------------------------------------------------- 00:17:00.021 00:17:00.021 00:17:00.021 real 0m13.676s 00:17:00.021 user 0m6.347s 00:17:00.021 sys 0m5.612s 00:17:00.021 13:36:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.021 ************************************ 00:17:00.021 END TEST xnvme_fio_plugin 00:17:00.021 ************************************ 00:17:00.021 13:36:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 13:36:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:00.021 13:36:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:00.021 13:36:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:00.021 13:36:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:00.021 13:36:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.021 13:36:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.021 13:36:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 ************************************ 00:17:00.021 START TEST xnvme_rpc 00:17:00.021 ************************************ 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69694 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69694 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69694 ']' 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.021 13:36:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 [2024-11-20 13:36:59.479208] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
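This second xnvme_rpc pass differs from the first only in cc["true"]=-c being appended, i.e. conserve_cpu=true. By hand (same scripts/rpc.py wrapper assumption as before):

scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
# The saved config should now report conserve_cpu=true, matching the
# [[ true == true ]] check in the trace below.
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
scripts/rpc.py bdev_xnvme_delete xnvme_bdev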
00:17:00.283 [2024-11-20 13:36:59.479360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69694 ] 00:17:00.283 [2024-11-20 13:36:59.649245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.546 [2024-11-20 13:36:59.808759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.491 xnvme_bdev 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.491 13:37:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69694 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69694 ']' 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69694 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69694 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69694' 00:17:01.492 killing process with pid 69694 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69694 00:17:01.492 13:37:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69694 00:17:03.407 00:17:03.407 real 0m3.106s 00:17:03.407 user 0m3.043s 00:17:03.407 sys 0m0.533s 00:17:03.407 13:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.407 13:37:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.407 ************************************ 00:17:03.407 END TEST xnvme_rpc 00:17:03.407 ************************************ 00:17:03.407 13:37:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:03.407 13:37:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.407 13:37:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.407 13:37:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.407 ************************************ 00:17:03.407 START TEST xnvme_bdevperf 00:17:03.407 ************************************ 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:03.407 13:37:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:03.407 { 00:17:03.407 "subsystems": [ 00:17:03.407 { 00:17:03.407 "subsystem": "bdev", 00:17:03.407 "config": [ 00:17:03.407 { 00:17:03.407 "params": { 00:17:03.407 "io_mechanism": "libaio", 00:17:03.407 "conserve_cpu": true, 00:17:03.407 "filename": "/dev/nvme0n1", 00:17:03.407 "name": "xnvme_bdev" 00:17:03.407 }, 00:17:03.407 "method": "bdev_xnvme_create" 00:17:03.407 }, 00:17:03.407 { 00:17:03.407 "method": "bdev_wait_for_examine" 00:17:03.407 } 00:17:03.407 ] 00:17:03.407 } 00:17:03.407 ] 00:17:03.407 } 00:17:03.407 [2024-11-20 13:37:02.636719] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:17:03.407 [2024-11-20 13:37:02.636894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69764 ] 00:17:03.407 [2024-11-20 13:37:02.803257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.668 [2024-11-20 13:37:02.941793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.927 Running I/O for 5 seconds... 00:17:06.266 30276.00 IOPS, 118.27 MiB/s [2024-11-20T13:37:06.636Z] 29543.50 IOPS, 115.40 MiB/s [2024-11-20T13:37:07.578Z] 29330.33 IOPS, 114.57 MiB/s [2024-11-20T13:37:08.546Z] 30328.00 IOPS, 118.47 MiB/s 00:17:09.119 Latency(us) 00:17:09.119 [2024-11-20T13:37:08.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.119 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:09.119 xnvme_bdev : 5.00 29949.61 116.99 0.00 0.00 2131.99 280.42 8217.21 00:17:09.119 [2024-11-20T13:37:08.546Z] =================================================================================================================== 00:17:09.119 [2024-11-20T13:37:08.546Z] Total : 29949.61 116.99 0.00 0.00 2131.99 280.42 8217.21 00:17:09.691 13:37:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:09.691 13:37:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:09.691 13:37:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:09.957 13:37:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:09.957 13:37:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:09.957 { 00:17:09.957 "subsystems": [ 00:17:09.957 { 00:17:09.957 "subsystem": "bdev", 00:17:09.957 "config": [ 00:17:09.957 { 00:17:09.957 "params": { 00:17:09.957 "io_mechanism": "libaio", 00:17:09.957 "conserve_cpu": true, 00:17:09.957 "filename": "/dev/nvme0n1", 00:17:09.957 "name": "xnvme_bdev" 00:17:09.957 }, 00:17:09.957 "method": "bdev_xnvme_create" 00:17:09.957 }, 00:17:09.957 { 00:17:09.957 "method": "bdev_wait_for_examine" 00:17:09.957 } 00:17:09.957 ] 00:17:09.957 } 00:17:09.957 ] 00:17:09.957 } 00:17:09.957 [2024-11-20 13:37:09.193014] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
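The [[ ... == ... ]] assertions in the xnvme_rpc test traced earlier compare each bdev_xnvme_create parameter against the live framework_get_config output. A hedged reconstruction of the rpc_xnvme helper those assertions call (xnvme/common.sh@65-66 in the trace; the jq filter is verbatim from the log), assuming rpc.py is invoked from the repo root:

    rpc_xnvme() {
        local attr=$1
        # dump the bdev subsystem config and pull one creation param out
        ./scripts/rpc.py framework_get_config bdev |
            jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$attr"
    }
    # usage mirroring the traced assertions
    [[ $(rpc_xnvme name) == xnvme_bdev ]]
    [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
    [[ $(rpc_xnvme io_mechanism) == libaio ]]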
00:17:09.957 [2024-11-20 13:37:09.193158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69839 ] 00:17:09.957 [2024-11-20 13:37:09.357252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.219 [2024-11-20 13:37:09.499028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.480 Running I/O for 5 seconds... 00:17:12.472 30537.00 IOPS, 119.29 MiB/s [2024-11-20T13:37:12.845Z] 30209.50 IOPS, 118.01 MiB/s [2024-11-20T13:37:14.242Z] 21231.00 IOPS, 82.93 MiB/s [2024-11-20T13:37:14.857Z] 16610.50 IOPS, 64.88 MiB/s [2024-11-20T13:37:14.857Z] 13895.60 IOPS, 54.28 MiB/s 00:17:15.430 Latency(us) 00:17:15.430 [2024-11-20T13:37:14.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.430 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:15.430 xnvme_bdev : 5.02 13851.72 54.11 0.00 0.00 4605.60 50.41 190356.87 00:17:15.430 [2024-11-20T13:37:14.857Z] =================================================================================================================== 00:17:15.430 [2024-11-20T13:37:14.857Z] Total : 13851.72 54.11 0.00 0.00 4605.60 50.41 190356.87 00:17:16.375 00:17:16.375 real 0m13.094s 00:17:16.375 user 0m7.209s 00:17:16.375 sys 0m4.640s 00:17:16.375 13:37:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.375 13:37:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:16.375 ************************************ 00:17:16.375 END TEST xnvme_bdevperf 00:17:16.375 ************************************ 00:17:16.375 13:37:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:16.375 13:37:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.375 13:37:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.375 13:37:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.375 ************************************ 00:17:16.375 START TEST xnvme_fio_plugin 00:17:16.375 ************************************ 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:16.375 13:37:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:16.375 { 00:17:16.375 "subsystems": [ 00:17:16.375 { 00:17:16.375 "subsystem": "bdev", 00:17:16.375 "config": [ 00:17:16.375 { 00:17:16.375 "params": { 00:17:16.375 "io_mechanism": "libaio", 00:17:16.375 "conserve_cpu": true, 00:17:16.375 "filename": "/dev/nvme0n1", 00:17:16.375 "name": "xnvme_bdev" 00:17:16.375 }, 00:17:16.375 "method": "bdev_xnvme_create" 00:17:16.375 }, 00:17:16.375 { 00:17:16.375 "method": "bdev_wait_for_examine" 00:17:16.375 } 00:17:16.375 ] 00:17:16.375 } 00:17:16.375 ] 00:17:16.375 } 00:17:16.636 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:16.636 fio-3.35 00:17:16.636 Starting 1 thread 00:17:23.203 00:17:23.203 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69959: Wed Nov 20 13:37:21 2024 00:17:23.203 read: IOPS=42.0k, BW=164MiB/s (172MB/s)(821MiB/5001msec) 00:17:23.203 slat (usec): min=4, max=2018, avg=19.09, stdev=48.88 00:17:23.203 clat (usec): min=86, max=6994, avg=955.95, stdev=551.15 00:17:23.203 lat (usec): min=93, max=7034, avg=975.03, stdev=552.18 00:17:23.203 clat percentiles (usec): 00:17:23.203 | 1.00th=[ 174], 5.00th=[ 260], 10.00th=[ 347], 20.00th=[ 498], 00:17:23.203 | 30.00th=[ 635], 40.00th=[ 758], 50.00th=[ 873], 60.00th=[ 996], 00:17:23.203 | 70.00th=[ 1139], 80.00th=[ 1319], 90.00th=[ 1631], 95.00th=[ 1958], 00:17:23.203 | 99.00th=[ 2868], 99.50th=[ 3195], 99.90th=[ 3818], 99.95th=[ 4080], 00:17:23.203 | 99.99th=[ 5211] 00:17:23.203 bw ( KiB/s): min=137840, 
max=184480, per=99.22%, avg=166885.33, stdev=16687.37, samples=9 00:17:23.203 iops : min=34460, max=46120, avg=41721.33, stdev=4171.84, samples=9 00:17:23.203 lat (usec) : 100=0.01%, 250=4.56%, 500=15.49%, 750=19.61%, 1000=20.91% 00:17:23.203 lat (msec) : 2=34.85%, 4=4.52%, 10=0.06% 00:17:23.203 cpu : usr=33.90%, sys=49.36%, ctx=105, majf=0, minf=764 00:17:23.203 IO depths : 1=0.3%, 2=1.6%, 4=4.5%, 8=10.9%, 16=24.8%, 32=56.0%, >=64=1.9% 00:17:23.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.203 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:23.203 issued rwts: total=210280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:23.203 00:17:23.203 Run status group 0 (all jobs): 00:17:23.203 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=821MiB (861MB), run=5001-5001msec 00:17:23.461 ----------------------------------------------------- 00:17:23.461 Suppressions used: 00:17:23.461 count bytes template 00:17:23.461 1 11 /usr/src/fio/parse.c 00:17:23.461 1 8 libtcmalloc_minimal.so 00:17:23.461 1 904 libcrypto.so 00:17:23.461 ----------------------------------------------------- 00:17:23.461 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.461 13:37:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.461 { 00:17:23.461 "subsystems": [ 00:17:23.461 { 00:17:23.461 "subsystem": "bdev", 00:17:23.461 "config": [ 00:17:23.461 { 00:17:23.461 "params": { 00:17:23.461 "io_mechanism": "libaio", 00:17:23.461 "conserve_cpu": true, 00:17:23.461 "filename": "/dev/nvme0n1", 00:17:23.461 "name": "xnvme_bdev" 00:17:23.461 }, 00:17:23.461 "method": "bdev_xnvme_create" 00:17:23.461 }, 00:17:23.461 { 00:17:23.461 "method": "bdev_wait_for_examine" 00:17:23.461 } 00:17:23.461 ] 00:17:23.461 } 00:17:23.461 ] 00:17:23.461 } 00:17:23.461 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:23.461 fio-3.35 00:17:23.461 Starting 1 thread 00:17:30.035 00:17:30.035 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70050: Wed Nov 20 13:37:28 2024 00:17:30.035 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(750MiB/5009msec); 0 zone resets 00:17:30.035 slat (usec): min=4, max=1602, avg=19.35, stdev=48.47 00:17:30.035 clat (usec): min=9, max=19708, avg=1118.10, stdev=1475.10 00:17:30.035 lat (usec): min=45, max=19713, avg=1137.45, stdev=1474.37 00:17:30.035 clat percentiles (usec): 00:17:30.035 | 1.00th=[ 161], 5.00th=[ 251], 10.00th=[ 334], 20.00th=[ 494], 00:17:30.035 | 30.00th=[ 627], 40.00th=[ 742], 50.00th=[ 857], 60.00th=[ 971], 00:17:30.035 | 70.00th=[ 1123], 80.00th=[ 1319], 90.00th=[ 1696], 95.00th=[ 2212], 00:17:30.035 | 99.00th=[10814], 99.50th=[12256], 99.90th=[13960], 99.95th=[14615], 00:17:30.035 | 99.99th=[16909] 00:17:30.035 bw ( KiB/s): min=79328, max=187472, per=100.00%, avg=153460.80, stdev=32201.80, samples=10 00:17:30.035 iops : min=19832, max=46868, avg=38365.20, stdev=8050.45, samples=10 00:17:30.035 lat (usec) : 10=0.01%, 20=0.01%, 50=0.04%, 100=0.15%, 250=4.73% 00:17:30.035 lat (usec) : 500=15.49%, 750=20.10%, 1000=21.49% 00:17:30.035 lat (msec) : 2=31.56%, 4=4.47%, 10=0.70%, 20=1.27% 00:17:30.035 cpu : usr=38.40%, sys=46.77%, ctx=34, majf=0, minf=765 00:17:30.035 IO depths : 1=0.2%, 2=1.3%, 4=4.0%, 8=10.2%, 16=23.3%, 32=58.3%, >=64=2.8% 00:17:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.035 complete : 0=0.0%, 4=97.8%, 8=0.2%, 16=0.2%, 32=0.2%, 64=1.5%, >=64=0.0% 00:17:30.035 issued rwts: total=0,191873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:30.035 00:17:30.035 Run status group 0 (all jobs): 00:17:30.035 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=750MiB (786MB), run=5009-5009msec 00:17:30.035 ----------------------------------------------------- 00:17:30.035 Suppressions used: 00:17:30.035 count bytes template 00:17:30.035 1 11 /usr/src/fio/parse.c 00:17:30.035 1 8 libtcmalloc_minimal.so 00:17:30.035 1 904 libcrypto.so 00:17:30.035 ----------------------------------------------------- 00:17:30.035 00:17:30.035 
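The ldd/grep/awk and LD_PRELOAD lines traced above locate the ASAN runtime the fio plugin was linked against, then preload it ahead of the plugin so the sanitizer's interceptors resolve before fio dlopen()s spdk_bdev (fio itself is not built with ASAN). A minimal sketch of that detection logic, with paths and flags taken from the traced commands:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        # ldd prints "soname => /resolved/path (addr)"; column 3 is the path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
        --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 \
        --thread=1 --name xnvme_bdev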
00:17:30.035 real 0m13.666s 00:17:30.035 user 0m6.315s 00:17:30.035 sys 0m5.375s 00:17:30.035 13:37:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.035 ************************************ 00:17:30.035 END TEST xnvme_fio_plugin 00:17:30.035 ************************************ 00:17:30.035 13:37:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:30.035 13:37:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:30.035 13:37:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:30.035 13:37:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.035 13:37:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:30.035 ************************************ 00:17:30.035 START TEST xnvme_rpc 00:17:30.035 ************************************ 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70137 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70137 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70137 ']' 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.035 13:37:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.294 [2024-11-20 13:37:29.514326] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
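The xnvme/xnvme.sh@75-84 lines above flip the matrix from libaio to io_uring and reset conserve_cpu to false; each (io_mechanism, conserve_cpu) pair re-runs the same three tests. A hedged reconstruction of that driver loop, with the mechanism list inferred only from the runs visible in this log (the harness's actual xnvme_io array may list more):

    declare -A method_bdev_xnvme_create_0=(
        [name]=xnvme_bdev
        [filename]=/dev/nvme0n1
    )
    for io in libaio io_uring; do
        method_bdev_xnvme_create_0[io_mechanism]=$io
        for cc in false true; do
            method_bdev_xnvme_create_0[conserve_cpu]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done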
00:17:30.294 [2024-11-20 13:37:29.514456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70137 ] 00:17:30.294 [2024-11-20 13:37:29.673575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.554 [2024-11-20 13:37:29.774926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.121 xnvme_bdev 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:31.121 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:31.122 13:37:30 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70137 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70137 ']' 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70137 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70137 00:17:31.122 killing process with pid 70137 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70137' 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70137 00:17:31.122 13:37:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70137 00:17:33.054 00:17:33.054 real 0m2.719s 00:17:33.054 user 0m2.787s 00:17:33.054 sys 0m0.386s 00:17:33.054 13:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.054 ************************************ 00:17:33.054 END TEST xnvme_rpc 00:17:33.054 ************************************ 00:17:33.054 13:37:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.054 13:37:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:33.054 13:37:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:33.054 13:37:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.054 13:37:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:33.054 ************************************ 00:17:33.054 START TEST xnvme_bdevperf 00:17:33.054 ************************************ 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:33.054 13:37:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:33.054 { 00:17:33.054 "subsystems": [ 00:17:33.054 { 00:17:33.054 "subsystem": "bdev", 00:17:33.054 "config": [ 00:17:33.054 { 00:17:33.054 "params": { 00:17:33.054 "io_mechanism": "io_uring", 00:17:33.054 "conserve_cpu": false, 00:17:33.054 "filename": "/dev/nvme0n1", 00:17:33.054 "name": "xnvme_bdev" 00:17:33.054 }, 00:17:33.054 "method": "bdev_xnvme_create" 00:17:33.054 }, 00:17:33.054 { 00:17:33.054 "method": "bdev_wait_for_examine" 00:17:33.054 } 00:17:33.054 ] 00:17:33.054 } 00:17:33.054 ] 00:17:33.054 } 00:17:33.054 [2024-11-20 13:37:32.307531] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:17:33.054 [2024-11-20 13:37:32.307688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70206 ] 00:17:33.054 [2024-11-20 13:37:32.474127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.316 [2024-11-20 13:37:32.614624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.577 Running I/O for 5 seconds... 00:17:35.907 31786.00 IOPS, 124.16 MiB/s [2024-11-20T13:37:36.276Z] 33057.50 IOPS, 129.13 MiB/s [2024-11-20T13:37:37.288Z] 33269.33 IOPS, 129.96 MiB/s [2024-11-20T13:37:38.249Z] 32652.00 IOPS, 127.55 MiB/s [2024-11-20T13:37:38.249Z] 32327.40 IOPS, 126.28 MiB/s 00:17:38.822 Latency(us) 00:17:38.822 [2024-11-20T13:37:38.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.822 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:38.822 xnvme_bdev : 5.01 32296.40 126.16 0.00 0.00 1976.76 206.38 13712.15 00:17:38.822 [2024-11-20T13:37:38.249Z] =================================================================================================================== 00:17:38.822 [2024-11-20T13:37:38.249Z] Total : 32296.40 126.16 0.00 0.00 1976.76 206.38 13712.15 00:17:39.395 13:37:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:39.395 13:37:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:39.395 13:37:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:39.395 13:37:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:39.395 13:37:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:39.657 { 00:17:39.657 "subsystems": [ 00:17:39.657 { 00:17:39.657 "subsystem": "bdev", 00:17:39.657 "config": [ 00:17:39.657 { 00:17:39.657 "params": { 00:17:39.657 "io_mechanism": "io_uring", 00:17:39.657 "conserve_cpu": false, 00:17:39.657 "filename": "/dev/nvme0n1", 00:17:39.657 "name": "xnvme_bdev" 00:17:39.657 }, 00:17:39.657 "method": "bdev_xnvme_create" 00:17:39.657 }, 00:17:39.657 { 00:17:39.657 "method": "bdev_wait_for_examine" 00:17:39.657 } 00:17:39.657 ] 00:17:39.657 } 00:17:39.657 ] 00:17:39.657 } 00:17:39.657 [2024-11-20 13:37:38.897886] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:17:39.657 [2024-11-20 13:37:38.898100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70281 ] 00:17:39.657 [2024-11-20 13:37:39.060865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.917 [2024-11-20 13:37:39.189540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.177 Running I/O for 5 seconds... 00:17:42.059 4894.00 IOPS, 19.12 MiB/s [2024-11-20T13:37:42.868Z] 4810.50 IOPS, 18.79 MiB/s [2024-11-20T13:37:43.840Z] 4916.00 IOPS, 19.20 MiB/s [2024-11-20T13:37:44.786Z] 4944.00 IOPS, 19.31 MiB/s [2024-11-20T13:37:44.786Z] 4961.40 IOPS, 19.38 MiB/s 00:17:45.359 Latency(us) 00:17:45.359 [2024-11-20T13:37:44.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.359 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:45.359 xnvme_bdev : 5.01 4959.91 19.37 0.00 0.00 12879.55 66.17 93161.94 00:17:45.359 [2024-11-20T13:37:44.786Z] =================================================================================================================== 00:17:45.359 [2024-11-20T13:37:44.786Z] Total : 4959.91 19.37 0.00 0.00 12879.55 66.17 93161.94 00:17:45.927 00:17:45.927 real 0m13.086s 00:17:45.927 user 0m6.030s 00:17:45.927 sys 0m6.762s 00:17:45.927 13:37:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.927 ************************************ 00:17:45.927 END TEST xnvme_bdevperf 00:17:45.927 ************************************ 00:17:45.927 13:37:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 13:37:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:46.189 13:37:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:46.189 13:37:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.189 13:37:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 ************************************ 00:17:46.189 START TEST xnvme_fio_plugin 00:17:46.189 ************************************ 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:46.189 13:37:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:46.189 { 00:17:46.189 "subsystems": [ 00:17:46.189 { 00:17:46.189 "subsystem": "bdev", 00:17:46.189 "config": [ 00:17:46.189 { 00:17:46.189 "params": { 00:17:46.189 "io_mechanism": "io_uring", 00:17:46.189 "conserve_cpu": false, 00:17:46.189 "filename": "/dev/nvme0n1", 00:17:46.189 "name": "xnvme_bdev" 00:17:46.189 }, 00:17:46.189 "method": "bdev_xnvme_create" 00:17:46.189 }, 00:17:46.189 { 00:17:46.189 "method": "bdev_wait_for_examine" 00:17:46.189 } 00:17:46.189 ] 00:17:46.189 } 00:17:46.189 ] 00:17:46.189 } 00:17:46.189 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:46.189 fio-3.35 00:17:46.189 Starting 1 thread 00:17:52.780 00:17:52.780 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70400: Wed Nov 20 13:37:51 2024 00:17:52.780 read: IOPS=30.8k, BW=120MiB/s (126MB/s)(603MiB/5002msec) 00:17:52.780 slat (usec): min=2, max=117, avg= 5.70, stdev= 4.48 00:17:52.780 clat (usec): min=1005, max=3663, avg=1847.27, stdev=357.94 00:17:52.780 lat (usec): min=1009, max=3680, avg=1852.97, stdev=359.65 00:17:52.780 clat percentiles (usec): 00:17:52.780 | 1.00th=[ 1221], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1549], 00:17:52.780 | 30.00th=[ 1631], 40.00th=[ 1713], 50.00th=[ 1795], 60.00th=[ 1876], 00:17:52.780 | 70.00th=[ 1991], 80.00th=[ 2147], 90.00th=[ 2343], 95.00th=[ 2507], 00:17:52.780 | 99.00th=[ 2835], 99.50th=[ 2966], 99.90th=[ 3392], 99.95th=[ 3490], 00:17:52.780 | 99.99th=[ 3556] 00:17:52.780 bw ( KiB/s): min=117248, 
max=135680, per=100.00%, avg=126748.78, stdev=5335.39, samples=9 00:17:52.780 iops : min=29312, max=33920, avg=31687.11, stdev=1333.85, samples=9 00:17:52.780 lat (msec) : 2=70.69%, 4=29.31% 00:17:52.780 cpu : usr=37.41%, sys=60.73%, ctx=18, majf=0, minf=762 00:17:52.780 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:52.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.780 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:52.780 issued rwts: total=154240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.780 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:52.780 00:17:52.780 Run status group 0 (all jobs): 00:17:52.780 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=603MiB (632MB), run=5002-5002msec 00:17:53.041 ----------------------------------------------------- 00:17:53.041 Suppressions used: 00:17:53.041 count bytes template 00:17:53.041 1 11 /usr/src/fio/parse.c 00:17:53.042 1 8 libtcmalloc_minimal.so 00:17:53.042 1 904 libcrypto.so 00:17:53.042 ----------------------------------------------------- 00:17:53.042 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:53.042 13:37:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.042 { 00:17:53.042 "subsystems": [ 00:17:53.042 { 00:17:53.042 "subsystem": "bdev", 00:17:53.042 "config": [ 00:17:53.042 { 00:17:53.042 "params": { 00:17:53.042 "io_mechanism": "io_uring", 00:17:53.042 "conserve_cpu": false, 00:17:53.042 "filename": "/dev/nvme0n1", 00:17:53.042 "name": "xnvme_bdev" 00:17:53.042 }, 00:17:53.042 "method": "bdev_xnvme_create" 00:17:53.042 }, 00:17:53.042 { 00:17:53.042 "method": "bdev_wait_for_examine" 00:17:53.042 } 00:17:53.042 ] 00:17:53.042 } 00:17:53.042 ] 00:17:53.042 } 00:17:53.307 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:53.307 fio-3.35 00:17:53.307 Starting 1 thread 00:17:59.977 00:17:59.977 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70492: Wed Nov 20 13:37:58 2024 00:17:59.977 write: IOPS=32.4k, BW=126MiB/s (133MB/s)(632MiB/5001msec); 0 zone resets 00:17:59.977 slat (usec): min=2, max=313, avg= 4.93, stdev= 3.34 00:17:59.977 clat (usec): min=101, max=12181, avg=1773.94, stdev=356.28 00:17:59.977 lat (usec): min=105, max=12185, avg=1778.87, stdev=357.02 00:17:59.977 clat percentiles (usec): 00:17:59.977 | 1.00th=[ 1139], 5.00th=[ 1287], 10.00th=[ 1385], 20.00th=[ 1500], 00:17:59.977 | 30.00th=[ 1582], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1811], 00:17:59.977 | 70.00th=[ 1909], 80.00th=[ 2024], 90.00th=[ 2212], 95.00th=[ 2376], 00:17:59.977 | 99.00th=[ 2704], 99.50th=[ 2835], 99.90th=[ 3228], 99.95th=[ 3392], 00:17:59.977 | 99.99th=[10290] 00:17:59.977 bw ( KiB/s): min=119296, max=143872, per=99.97%, avg=129400.89, stdev=7597.63, samples=9 00:17:59.978 iops : min=29824, max=35968, avg=32350.22, stdev=1899.41, samples=9 00:17:59.978 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04% 00:17:59.978 lat (msec) : 2=77.67%, 4=22.25%, 10=0.02%, 20=0.01% 00:17:59.978 cpu : usr=34.80%, sys=63.22%, ctx=8, majf=0, minf=763 00:17:59.978 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:59.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.978 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:59.978 issued rwts: total=0,161832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:59.978 00:17:59.978 Run status group 0 (all jobs): 00:17:59.978 WRITE: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=632MiB (663MB), run=5001-5001msec 00:17:59.978 ----------------------------------------------------- 00:17:59.978 Suppressions used: 00:17:59.978 count bytes template 00:17:59.978 1 11 /usr/src/fio/parse.c 00:17:59.978 1 8 libtcmalloc_minimal.so 00:17:59.978 1 904 libcrypto.so 00:17:59.978 ----------------------------------------------------- 00:17:59.978 00:17:59.978 00:17:59.978 real 0m14.007s 00:17:59.978 user 0m6.658s 00:17:59.978 sys 0m6.815s 00:17:59.978 13:37:59 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.978 ************************************ 00:17:59.978 END TEST xnvme_fio_plugin 00:17:59.978 13:37:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:59.978 ************************************ 00:18:00.237 13:37:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:00.237 13:37:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:00.237 13:37:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:00.237 13:37:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:00.237 13:37:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.237 13:37:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.237 13:37:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:00.237 ************************************ 00:18:00.237 START TEST xnvme_rpc 00:18:00.237 ************************************ 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70582 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70582 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70582 ']' 00:18:00.237 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.238 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.238 13:37:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:00.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.238 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.238 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.238 13:37:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.238 [2024-11-20 13:37:59.546802] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:18:00.238 [2024-11-20 13:37:59.546961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70582 ] 00:18:00.498 [2024-11-20 13:37:59.711727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.498 [2024-11-20 13:37:59.849468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 xnvme_bdev 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:01.442 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70582 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70582 ']' 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70582 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70582 00:18:01.443 killing process with pid 70582 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70582' 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70582 00:18:01.443 13:38:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70582 00:18:03.363 00:18:03.363 real 0m3.065s 00:18:03.363 user 0m3.066s 00:18:03.363 sys 0m0.504s 00:18:03.363 13:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.363 ************************************ 00:18:03.363 END TEST xnvme_rpc 00:18:03.363 ************************************ 00:18:03.363 13:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 13:38:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:03.363 13:38:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:03.363 13:38:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.363 13:38:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 ************************************ 00:18:03.363 START TEST xnvme_bdevperf 00:18:03.363 ************************************ 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:03.363 13:38:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 { 00:18:03.363 "subsystems": [ 00:18:03.363 { 00:18:03.363 "subsystem": "bdev", 00:18:03.363 "config": [ 00:18:03.363 { 00:18:03.363 "params": { 00:18:03.363 "io_mechanism": "io_uring", 00:18:03.363 "conserve_cpu": true, 00:18:03.363 "filename": "/dev/nvme0n1", 00:18:03.363 "name": "xnvme_bdev" 00:18:03.363 }, 00:18:03.363 "method": "bdev_xnvme_create" 00:18:03.363 }, 00:18:03.363 { 00:18:03.363 "method": "bdev_wait_for_examine" 00:18:03.363 } 00:18:03.363 ] 00:18:03.363 } 00:18:03.363 ] 00:18:03.363 } 00:18:03.363 [2024-11-20 13:38:02.655905] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:18:03.363 [2024-11-20 13:38:02.656080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70656 ] 00:18:03.626 [2024-11-20 13:38:02.824801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.626 [2024-11-20 13:38:02.965682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.888 Running I/O for 5 seconds... 00:18:06.216 30083.00 IOPS, 117.51 MiB/s [2024-11-20T13:38:06.586Z] 30496.00 IOPS, 119.12 MiB/s [2024-11-20T13:38:07.625Z] 30407.67 IOPS, 118.78 MiB/s [2024-11-20T13:38:08.570Z] 30684.00 IOPS, 119.86 MiB/s [2024-11-20T13:38:08.570Z] 30651.60 IOPS, 119.73 MiB/s 00:18:09.143 Latency(us) 00:18:09.143 [2024-11-20T13:38:08.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.143 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:09.143 xnvme_bdev : 5.00 30644.41 119.70 0.00 0.00 2083.74 161.48 18148.43 00:18:09.143 [2024-11-20T13:38:08.570Z] =================================================================================================================== 00:18:09.143 [2024-11-20T13:38:08.570Z] Total : 30644.41 119.70 0.00 0.00 2083.74 161.48 18148.43 00:18:09.715 13:38:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:09.715 13:38:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:09.715 13:38:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:09.715 13:38:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:09.715 13:38:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:09.975 { 00:18:09.975 "subsystems": [ 00:18:09.975 { 00:18:09.975 "subsystem": "bdev", 00:18:09.975 "config": [ 00:18:09.975 { 00:18:09.975 "params": { 00:18:09.975 "io_mechanism": "io_uring", 00:18:09.975 "conserve_cpu": true, 00:18:09.975 "filename": "/dev/nvme0n1", 00:18:09.975 "name": "xnvme_bdev" 00:18:09.975 }, 00:18:09.975 "method": "bdev_xnvme_create" 00:18:09.975 }, 00:18:09.975 { 00:18:09.975 "method": "bdev_wait_for_examine" 00:18:09.975 } 00:18:09.975 ] 00:18:09.975 } 00:18:09.975 ] 00:18:09.975 } 00:18:09.975 [2024-11-20 13:38:09.191231] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
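Each bdevperf pass above feeds the generated JSON config through /dev/fd/62; nothing in the config itself is special, so the run is straightforward to reproduce standalone. A sketch with the same config written to a regular file instead (the flags are copied from the trace; the /tmp path and the /home/vagrant/spdk_repo layout are the assumptions):

    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # Same knobs as the trace: queue depth 64, 4 KiB I/O, 5 s run,
    # randread against the bdev named xnvme_bdev.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096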
00:18:09.975 [2024-11-20 13:38:09.191388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70736 ] 00:18:09.975 [2024-11-20 13:38:09.355991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.234 [2024-11-20 13:38:09.498050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.495 Running I/O for 5 seconds... 00:18:12.454 6749.00 IOPS, 26.36 MiB/s [2024-11-20T13:38:13.269Z] 6999.00 IOPS, 27.34 MiB/s [2024-11-20T13:38:14.215Z] 7144.33 IOPS, 27.91 MiB/s [2024-11-20T13:38:15.158Z] 6945.50 IOPS, 27.13 MiB/s [2024-11-20T13:38:15.158Z] 6994.60 IOPS, 27.32 MiB/s 00:18:15.731 Latency(us) 00:18:15.731 [2024-11-20T13:38:15.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.731 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:15.731 xnvme_bdev : 5.01 6993.67 27.32 0.00 0.00 9137.40 79.16 182290.90 00:18:15.731 [2024-11-20T13:38:15.158Z] =================================================================================================================== 00:18:15.731 [2024-11-20T13:38:15.158Z] Total : 6993.67 27.32 0.00 0.00 9137.40 79.16 182290.90 00:18:16.303 00:18:16.303 real 0m13.124s 00:18:16.303 user 0m9.528s 00:18:16.303 sys 0m2.597s 00:18:16.303 13:38:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.303 13:38:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:16.303 ************************************ 00:18:16.303 END TEST xnvme_bdevperf 00:18:16.303 ************************************ 00:18:16.566 13:38:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:16.566 13:38:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.566 13:38:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.566 13:38:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:16.566 ************************************ 00:18:16.566 START TEST xnvme_fio_plugin 00:18:16.566 ************************************ 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:16.566 13:38:15 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:16.566 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:16.567 13:38:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:16.567 { 00:18:16.567 "subsystems": [ 00:18:16.567 { 00:18:16.567 "subsystem": "bdev", 00:18:16.567 "config": [ 00:18:16.567 { 00:18:16.567 "params": { 00:18:16.567 "io_mechanism": "io_uring", 00:18:16.567 "conserve_cpu": true, 00:18:16.567 "filename": "/dev/nvme0n1", 00:18:16.567 "name": "xnvme_bdev" 00:18:16.567 }, 00:18:16.567 "method": "bdev_xnvme_create" 00:18:16.567 }, 00:18:16.567 { 00:18:16.567 "method": "bdev_wait_for_examine" 00:18:16.567 } 00:18:16.567 ] 00:18:16.567 } 00:18:16.567 ] 00:18:16.567 } 00:18:16.567 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:16.567 fio-3.35 00:18:16.567 Starting 1 thread 00:18:23.124 00:18:23.124 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70851: Wed Nov 20 13:38:21 2024 00:18:23.124 read: IOPS=62.5k, BW=244MiB/s (256MB/s)(1221MiB/5005msec) 00:18:23.124 slat (usec): min=2, max=182, avg= 3.66, stdev= 1.98 00:18:23.124 clat (usec): min=85, max=9786, avg=884.87, stdev=241.69 00:18:23.124 lat (usec): min=89, max=9789, avg=888.53, stdev=242.11 00:18:23.124 clat percentiles (usec): 00:18:23.124 | 1.00th=[ 619], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 734], 00:18:23.124 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 865], 00:18:23.124 | 70.00th=[ 906], 80.00th=[ 996], 90.00th=[ 1123], 95.00th=[ 1270], 00:18:23.124 | 99.00th=[ 1729], 99.50th=[ 2008], 99.90th=[ 2999], 99.95th=[ 3523], 00:18:23.124 | 99.99th=[ 6128] 00:18:23.124 bw ( KiB/s): min=231720, max=275456, 
per=100.00%, avg=250130.40, stdev=14539.44, samples=10 00:18:23.124 iops : min=57930, max=68864, avg=62532.60, stdev=3634.86, samples=10 00:18:23.124 lat (usec) : 100=0.01%, 250=0.02%, 500=0.16%, 750=24.90%, 1000=55.63% 00:18:23.124 lat (msec) : 2=18.78%, 4=0.47%, 10=0.03% 00:18:23.124 cpu : usr=41.35%, sys=54.40%, ctx=11, majf=0, minf=762 00:18:23.124 IO depths : 1=1.2%, 2=2.6%, 4=5.9%, 8=12.4%, 16=25.2%, 32=51.1%, >=64=1.6% 00:18:23.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.124 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:23.124 issued rwts: total=312674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.124 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:23.124 00:18:23.124 Run status group 0 (all jobs): 00:18:23.124 READ: bw=244MiB/s (256MB/s), 244MiB/s-244MiB/s (256MB/s-256MB/s), io=1221MiB (1281MB), run=5005-5005msec 00:18:23.383 ----------------------------------------------------- 00:18:23.383 Suppressions used: 00:18:23.383 count bytes template 00:18:23.383 1 11 /usr/src/fio/parse.c 00:18:23.383 1 8 libtcmalloc_minimal.so 00:18:23.383 1 904 libcrypto.so 00:18:23.383 ----------------------------------------------------- 00:18:23.383 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:23.383 
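The fio plugin runs hinge on the sanitizer bookkeeping traced above: the harness runs ldd against the spdk_bdev ioengine, greps out libasan, and preloads it ahead of the plugin so the ASan runtime is initialized before fio dlopen()s the ioengine. Condensed into a standalone sketch (the fio option list is copied from the trace; only the /tmp config path is an assumption):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Third ldd column is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # The sanitizer runtime must come first in the process, hence the order.
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev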
13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:23.383 13:38:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:23.383 { 00:18:23.383 "subsystems": [ 00:18:23.383 { 00:18:23.383 "subsystem": "bdev", 00:18:23.383 "config": [ 00:18:23.384 { 00:18:23.384 "params": { 00:18:23.384 "io_mechanism": "io_uring", 00:18:23.384 "conserve_cpu": true, 00:18:23.384 "filename": "/dev/nvme0n1", 00:18:23.384 "name": "xnvme_bdev" 00:18:23.384 }, 00:18:23.384 "method": "bdev_xnvme_create" 00:18:23.384 }, 00:18:23.384 { 00:18:23.384 "method": "bdev_wait_for_examine" 00:18:23.384 } 00:18:23.384 ] 00:18:23.384 } 00:18:23.384 ] 00:18:23.384 } 00:18:23.642 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:23.642 fio-3.35 00:18:23.642 Starting 1 thread 00:18:30.242 00:18:30.242 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70943: Wed Nov 20 13:38:28 2024 00:18:30.242 write: IOPS=51.9k, BW=203MiB/s (213MB/s)(1015MiB/5001msec); 0 zone resets 00:18:30.242 slat (usec): min=2, max=853, avg= 4.24, stdev= 6.93 00:18:30.242 clat (usec): min=41, max=171976, avg=1082.13, stdev=2717.14 00:18:30.242 lat (usec): min=44, max=171985, avg=1086.37, stdev=2717.30 00:18:30.242 clat percentiles (usec): 00:18:30.242 | 1.00th=[ 383], 5.00th=[ 676], 10.00th=[ 709], 20.00th=[ 766], 00:18:30.242 | 30.00th=[ 816], 40.00th=[ 857], 50.00th=[ 906], 60.00th=[ 979], 00:18:30.242 | 70.00th=[ 1057], 80.00th=[ 1188], 90.00th=[ 1516], 95.00th=[ 1926], 00:18:30.242 | 99.00th=[ 3032], 99.50th=[ 3490], 99.90th=[ 5211], 99.95th=[ 7111], 00:18:30.242 | 99.99th=[170918] 00:18:30.242 bw ( KiB/s): min=148504, max=260096, per=100.00%, avg=214012.00, stdev=36701.28, samples=9 00:18:30.242 iops : min=37126, max=65024, avg=53503.00, stdev=9175.32, samples=9 00:18:30.242 lat (usec) : 50=0.01%, 100=0.05%, 250=0.34%, 500=1.35%, 750=15.80% 00:18:30.242 lat (usec) : 1000=45.45% 00:18:30.242 lat (msec) : 2=32.63%, 4=4.12%, 10=0.21%, 20=0.02%, 250=0.02% 00:18:30.242 cpu : usr=44.84%, sys=46.90%, ctx=72, majf=0, minf=763 00:18:30.242 IO depths : 1=1.3%, 2=2.7%, 4=5.4%, 8=11.0%, 16=23.0%, 32=54.5%, >=64=2.1% 00:18:30.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.242 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:18:30.242 issued rwts: total=0,259793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.242 00:18:30.242 Run status group 0 (all jobs): 00:18:30.242 WRITE: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1015MiB (1064MB), run=5001-5001msec 00:18:30.242 ----------------------------------------------------- 00:18:30.242 Suppressions used: 00:18:30.242 count bytes template 00:18:30.242 1 11 /usr/src/fio/parse.c 00:18:30.242 1 8 libtcmalloc_minimal.so 00:18:30.242 1 904 libcrypto.so 00:18:30.242 ----------------------------------------------------- 00:18:30.242 00:18:30.242 00:18:30.242 real 
0m13.553s 00:18:30.242 user 0m6.967s 00:18:30.242 sys 0m5.633s 00:18:30.242 13:38:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.242 ************************************ 00:18:30.242 END TEST xnvme_fio_plugin 00:18:30.242 ************************************ 00:18:30.242 13:38:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:30.242 13:38:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:30.242 13:38:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.242 13:38:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.242 13:38:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.242 ************************************ 00:18:30.242 START TEST xnvme_rpc 00:18:30.242 ************************************ 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71029 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71029 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71029 ']' 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.242 13:38:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.242 [2024-11-20 13:38:29.448399] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
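The four property assertions in this xnvme_rpc pass (name, filename, io_mechanism, conserve_cpu) all reduce to one pattern: dump the bdev subsystem config and pick a field out of the bdev_xnvme_create entry. A sketch of that helper, assuming the stock scripts/rpc.py client is available (the jq filter is verbatim from the trace):

    rpc_xnvme() {
      local prop=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$prop"
    }
    rpc_xnvme filename    # expected: /dev/ng0n1 in this io_uring_cmd pass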
00:18:30.242 [2024-11-20 13:38:29.448525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71029 ] 00:18:30.242 [2024-11-20 13:38:29.607117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.500 [2024-11-20 13:38:29.708697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.066 xnvme_bdev 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71029 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71029 ']' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71029 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71029 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.066 killing process with pid 71029 00:18:31.066 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71029' 00:18:31.067 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71029 00:18:31.067 13:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71029 00:18:33.028 00:18:33.028 real 0m2.608s 00:18:33.028 user 0m2.712s 00:18:33.028 sys 0m0.346s 00:18:33.028 13:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.028 13:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.028 ************************************ 00:18:33.028 END TEST xnvme_rpc 00:18:33.028 ************************************ 00:18:33.028 13:38:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:33.028 13:38:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:33.028 13:38:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.028 13:38:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:33.028 ************************************ 00:18:33.028 START TEST xnvme_bdevperf 00:18:33.028 ************************************ 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:33.028 13:38:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:33.028 { 00:18:33.028 "subsystems": [ 00:18:33.028 { 00:18:33.028 "subsystem": "bdev", 00:18:33.028 "config": [ 00:18:33.028 { 00:18:33.028 "params": { 00:18:33.028 "io_mechanism": "io_uring_cmd", 00:18:33.028 "conserve_cpu": false, 00:18:33.028 "filename": "/dev/ng0n1", 00:18:33.028 "name": "xnvme_bdev" 00:18:33.028 }, 00:18:33.028 "method": "bdev_xnvme_create" 00:18:33.028 }, 00:18:33.028 { 00:18:33.028 "method": "bdev_wait_for_examine" 00:18:33.028 } 00:18:33.028 ] 00:18:33.028 } 00:18:33.028 ] 00:18:33.028 } 00:18:33.028 [2024-11-20 13:38:32.079226] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:18:33.028 [2024-11-20 13:38:32.079327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71092 ] 00:18:33.028 [2024-11-20 13:38:32.234113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.028 [2024-11-20 13:38:32.336659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.285 Running I/O for 5 seconds... 00:18:35.589 61406.00 IOPS, 239.87 MiB/s [2024-11-20T13:38:35.949Z] 62300.50 IOPS, 243.36 MiB/s [2024-11-20T13:38:36.883Z] 62414.67 IOPS, 243.81 MiB/s [2024-11-20T13:38:37.816Z] 62508.00 IOPS, 244.17 MiB/s 00:18:38.389 Latency(us) 00:18:38.389 [2024-11-20T13:38:37.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.389 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:38.389 xnvme_bdev : 5.00 62345.43 243.54 0.00 0.00 1022.33 266.24 8872.57 00:18:38.389 [2024-11-20T13:38:37.816Z] =================================================================================================================== 00:18:38.389 [2024-11-20T13:38:37.816Z] Total : 62345.43 243.54 0.00 0.00 1022.33 266.24 8872.57 00:18:38.953 13:38:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:38.953 13:38:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:38.953 13:38:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:38.953 13:38:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:38.953 13:38:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:38.953 { 00:18:38.953 "subsystems": [ 00:18:38.953 { 00:18:38.953 "subsystem": "bdev", 00:18:38.953 "config": [ 00:18:38.953 { 00:18:38.953 "params": { 00:18:38.953 "io_mechanism": "io_uring_cmd", 00:18:38.953 "conserve_cpu": false, 00:18:38.953 "filename": "/dev/ng0n1", 00:18:38.953 "name": "xnvme_bdev" 00:18:38.953 }, 00:18:38.953 "method": "bdev_xnvme_create" 00:18:38.953 }, 00:18:38.953 { 00:18:38.953 "method": "bdev_wait_for_examine" 00:18:38.953 } 00:18:38.953 ] 00:18:38.953 } 00:18:38.953 ] 00:18:38.953 } 00:18:38.953 [2024-11-20 13:38:38.373569] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
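Unlike the io_uring pass, which only exercised randread and randwrite, the io_uring_cmd bdevperf pass walks through four workloads: randwrite, unmap, and write_zeroes runs follow the randread results above. A sketch of that loop, with the pattern list inferred from the runs logged here rather than read out of xnvme.sh:

    # -w is the only flag that changes between runs.
    for w in randread randwrite unmap write_zeroes; do
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /tmp/xnvme_bdev.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done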
00:18:38.953 [2024-11-20 13:38:38.373682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71167 ] 00:18:39.212 [2024-11-20 13:38:38.534764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.212 [2024-11-20 13:38:38.635859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.470 Running I/O for 5 seconds... 00:18:41.774 38691.00 IOPS, 151.14 MiB/s [2024-11-20T13:38:42.134Z] 40720.50 IOPS, 159.06 MiB/s [2024-11-20T13:38:43.070Z] 37222.33 IOPS, 145.40 MiB/s [2024-11-20T13:38:44.003Z] 36805.75 IOPS, 143.77 MiB/s [2024-11-20T13:38:44.003Z] 34475.20 IOPS, 134.67 MiB/s 00:18:44.576 Latency(us) 00:18:44.576 [2024-11-20T13:38:44.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.576 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:44.576 xnvme_bdev : 5.01 34451.97 134.58 0.00 0.00 1852.34 43.72 173418.34 00:18:44.576 [2024-11-20T13:38:44.003Z] =================================================================================================================== 00:18:44.576 [2024-11-20T13:38:44.003Z] Total : 34451.97 134.58 0.00 0.00 1852.34 43.72 173418.34 00:18:45.508 13:38:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:45.508 13:38:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:45.508 13:38:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:45.508 13:38:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:45.508 13:38:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:45.508 { 00:18:45.508 "subsystems": [ 00:18:45.508 { 00:18:45.508 "subsystem": "bdev", 00:18:45.508 "config": [ 00:18:45.508 { 00:18:45.508 "params": { 00:18:45.508 "io_mechanism": "io_uring_cmd", 00:18:45.508 "conserve_cpu": false, 00:18:45.508 "filename": "/dev/ng0n1", 00:18:45.508 "name": "xnvme_bdev" 00:18:45.508 }, 00:18:45.508 "method": "bdev_xnvme_create" 00:18:45.508 }, 00:18:45.508 { 00:18:45.508 "method": "bdev_wait_for_examine" 00:18:45.508 } 00:18:45.508 ] 00:18:45.508 } 00:18:45.508 ] 00:18:45.508 } 00:18:45.508 [2024-11-20 13:38:44.668600] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:18:45.508 [2024-11-20 13:38:44.668725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71247 ] 00:18:45.508 [2024-11-20 13:38:44.830823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.508 [2024-11-20 13:38:44.929584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.766 Running I/O for 5 seconds... 
00:18:48.161 93760.00 IOPS, 366.25 MiB/s [2024-11-20T13:38:48.522Z] 91776.00 IOPS, 358.50 MiB/s [2024-11-20T13:38:49.460Z] 88170.67 IOPS, 344.42 MiB/s [2024-11-20T13:38:50.404Z] 86512.00 IOPS, 337.94 MiB/s 00:18:50.977 Latency(us) 00:18:50.977 [2024-11-20T13:38:50.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.977 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:50.977 xnvme_bdev : 5.00 85315.26 333.26 0.00 0.00 746.63 456.86 5923.45 00:18:50.977 [2024-11-20T13:38:50.404Z] =================================================================================================================== 00:18:50.977 [2024-11-20T13:38:50.404Z] Total : 85315.26 333.26 0.00 0.00 746.63 456.86 5923.45 00:18:51.548 13:38:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:51.548 13:38:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:51.548 13:38:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:51.548 13:38:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:51.548 13:38:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:51.548 { 00:18:51.548 "subsystems": [ 00:18:51.548 { 00:18:51.548 "subsystem": "bdev", 00:18:51.548 "config": [ 00:18:51.548 { 00:18:51.548 "params": { 00:18:51.548 "io_mechanism": "io_uring_cmd", 00:18:51.548 "conserve_cpu": false, 00:18:51.548 "filename": "/dev/ng0n1", 00:18:51.548 "name": "xnvme_bdev" 00:18:51.548 }, 00:18:51.548 "method": "bdev_xnvme_create" 00:18:51.548 }, 00:18:51.548 { 00:18:51.548 "method": "bdev_wait_for_examine" 00:18:51.548 } 00:18:51.548 ] 00:18:51.548 } 00:18:51.548 ] 00:18:51.548 } 00:18:51.806 [2024-11-20 13:38:50.983365] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:18:51.806 [2024-11-20 13:38:50.983547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71320 ] 00:18:51.806 [2024-11-20 13:38:51.164510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.065 [2024-11-20 13:38:51.264674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.323 Running I/O for 5 seconds... 
00:18:54.189 432.00 IOPS, 1.69 MiB/s [2024-11-20T13:38:54.549Z] 377.00 IOPS, 1.47 MiB/s [2024-11-20T13:38:55.942Z] 393.00 IOPS, 1.54 MiB/s [2024-11-20T13:38:56.874Z] 431.50 IOPS, 1.69 MiB/s [2024-11-20T13:38:56.875Z] 1513.20 IOPS, 5.91 MiB/s 00:18:57.448 Latency(us) 00:18:57.448 [2024-11-20T13:38:56.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.448 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:57.448 xnvme_bdev : 5.06 1507.57 5.89 0.00 0.00 42183.23 84.28 354902.65 00:18:57.448 [2024-11-20T13:38:56.875Z] =================================================================================================================== 00:18:57.448 [2024-11-20T13:38:56.875Z] Total : 1507.57 5.89 0.00 0.00 42183.23 84.28 354902.65 00:18:58.013 00:18:58.013 real 0m25.312s 00:18:58.013 user 0m14.284s 00:18:58.013 sys 0m10.600s 00:18:58.013 13:38:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.013 13:38:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:58.013 ************************************ 00:18:58.013 END TEST xnvme_bdevperf 00:18:58.013 ************************************ 00:18:58.013 13:38:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:58.013 13:38:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.013 13:38:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.013 13:38:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.013 ************************************ 00:18:58.013 START TEST xnvme_fio_plugin 00:18:58.013 ************************************ 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 
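fio accepts the same configuration as a job file, which is often easier to iterate on than the long command line in the trace. A sketch translating those flags one-for-one (every option is lifted from the invocation above; the /tmp paths are placeholders):

    cat > /tmp/xnvme_bdev.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=/tmp/xnvme_bdev.json
    thread=1
    direct=1
    bs=4k
    iodepth=64
    numjobs=1
    time_based=1
    runtime=5

    [xnvme_bdev]
    filename=xnvme_bdev
    rw=randread
    EOF
    # Under an ASan build, keep the LD_PRELOAD pairing shown earlier.
    /usr/src/fio/fio /tmp/xnvme_bdev.fio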
00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:58.013 13:38:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:58.013 { 00:18:58.013 "subsystems": [ 00:18:58.013 { 00:18:58.013 "subsystem": "bdev", 00:18:58.013 "config": [ 00:18:58.013 { 00:18:58.013 "params": { 00:18:58.013 "io_mechanism": "io_uring_cmd", 00:18:58.013 "conserve_cpu": false, 00:18:58.013 "filename": "/dev/ng0n1", 00:18:58.013 "name": "xnvme_bdev" 00:18:58.013 }, 00:18:58.013 "method": "bdev_xnvme_create" 00:18:58.013 }, 00:18:58.013 { 00:18:58.013 "method": "bdev_wait_for_examine" 00:18:58.013 } 00:18:58.013 ] 00:18:58.013 } 00:18:58.013 ] 00:18:58.013 } 00:18:58.271 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:58.271 fio-3.35 00:18:58.271 Starting 1 thread 00:19:04.849 00:19:04.849 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71435: Wed Nov 20 13:39:03 2024 00:19:04.849 read: IOPS=47.7k, BW=186MiB/s (195MB/s)(932MiB/5001msec) 00:19:04.849 slat (usec): min=2, max=724, avg= 3.98, stdev= 3.20 00:19:04.849 clat (usec): min=494, max=6439, avg=1182.00, stdev=371.35 00:19:04.849 lat (usec): min=497, max=6445, avg=1185.98, stdev=371.88 00:19:04.849 clat percentiles (usec): 00:19:04.849 | 1.00th=[ 676], 5.00th=[ 725], 10.00th=[ 775], 20.00th=[ 857], 00:19:04.849 | 30.00th=[ 938], 40.00th=[ 1020], 50.00th=[ 1123], 60.00th=[ 1221], 00:19:04.849 | 70.00th=[ 1336], 80.00th=[ 1467], 90.00th=[ 1680], 95.00th=[ 1844], 00:19:04.849 | 99.00th=[ 2212], 99.50th=[ 2343], 99.90th=[ 2868], 99.95th=[ 4080], 00:19:04.849 | 99.99th=[ 6325] 00:19:04.849 bw ( KiB/s): min=161896, max=213904, per=97.24%, avg=185599.11, stdev=18707.21, samples=9 00:19:04.849 iops : min=40468, max=53480, avg=46399.56, stdev=4678.51, samples=9 00:19:04.849 lat (usec) : 500=0.01%, 750=7.08%, 1000=30.37% 00:19:04.849 lat (msec) : 2=59.98%, 4=2.51%, 10=0.05% 00:19:04.849 cpu : usr=38.80%, sys=59.56%, ctx=76, majf=0, minf=762 00:19:04.849 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:04.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.849 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:19:04.849 issued rwts: total=238624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.849 00:19:04.849 Run status group 0 (all jobs): 00:19:04.849 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=932MiB (977MB), run=5001-5001msec 00:19:04.849 ----------------------------------------------------- 00:19:04.849 Suppressions used: 00:19:04.849 count bytes template 00:19:04.849 1 11 /usr/src/fio/parse.c 00:19:04.849 1 8 libtcmalloc_minimal.so 00:19:04.849 1 904 libcrypto.so 00:19:04.849 ----------------------------------------------------- 00:19:04.849 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:04.849 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:04.850 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:04.850 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:04.850 13:39:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:04.850 { 00:19:04.850 "subsystems": [ 00:19:04.850 { 00:19:04.850 "subsystem": "bdev", 00:19:04.850 "config": [ 00:19:04.850 { 00:19:04.850 "params": { 00:19:04.850 "io_mechanism": "io_uring_cmd", 00:19:04.850 "conserve_cpu": false, 00:19:04.850 "filename": "/dev/ng0n1", 00:19:04.850 "name": "xnvme_bdev" 00:19:04.850 }, 00:19:04.850 "method": "bdev_xnvme_create" 00:19:04.850 }, 00:19:04.850 { 00:19:04.850 "method": "bdev_wait_for_examine" 00:19:04.850 } 00:19:04.850 ] 00:19:04.850 } 00:19:04.850 ] 00:19:04.850 } 00:19:05.107 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:05.107 fio-3.35 00:19:05.107 Starting 1 thread 00:19:11.675 00:19:11.675 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71521: Wed Nov 20 13:39:09 2024 00:19:11.675 write: IOPS=42.9k, BW=168MiB/s (176MB/s)(840MiB/5008msec); 0 zone resets 00:19:11.675 slat (usec): min=2, max=602, avg= 4.08, stdev= 2.52 00:19:11.675 clat (usec): min=55, max=14410, avg=1344.46, stdev=1274.46 00:19:11.675 lat (usec): min=58, max=14413, avg=1348.54, stdev=1274.60 00:19:11.675 clat percentiles (usec): 00:19:11.675 | 1.00th=[ 486], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 816], 00:19:11.675 | 30.00th=[ 881], 40.00th=[ 955], 50.00th=[ 1045], 60.00th=[ 1156], 00:19:11.675 | 70.00th=[ 1303], 80.00th=[ 1500], 90.00th=[ 1778], 95.00th=[ 2180], 00:19:11.675 | 99.00th=[ 8717], 99.50th=[ 9896], 99.90th=[11600], 99.95th=[12125], 00:19:11.675 | 99.99th=[13042] 00:19:11.675 bw ( KiB/s): min=112320, max=242936, per=100.00%, avg=171991.20, stdev=56124.60, samples=10 00:19:11.675 iops : min=28080, max=60734, avg=42997.80, stdev=14031.15, samples=10 00:19:11.675 lat (usec) : 100=0.03%, 250=0.18%, 500=0.89%, 750=10.34%, 1000=34.24% 00:19:11.675 lat (msec) : 2=48.15%, 4=2.84%, 10=2.86%, 20=0.48% 00:19:11.675 cpu : usr=37.81%, sys=61.03%, ctx=19, majf=0, minf=763 00:19:11.675 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.5%, 16=22.5%, 32=55.7%, >=64=2.4% 00:19:11.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.675 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:19:11.675 issued rwts: total=0,215036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:11.675 00:19:11.675 Run status group 0 (all jobs): 00:19:11.675 WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=840MiB (881MB), run=5008-5008msec 00:19:11.675 ----------------------------------------------------- 00:19:11.675 Suppressions used: 00:19:11.675 count bytes template 00:19:11.675 1 11 /usr/src/fio/parse.c 00:19:11.675 1 8 libtcmalloc_minimal.so 00:19:11.675 1 904 libcrypto.so 00:19:11.675 ----------------------------------------------------- 00:19:11.675 00:19:11.675 00:19:11.675 real 0m13.516s 00:19:11.675 user 0m6.565s 00:19:11.675 sys 0m6.513s 00:19:11.675 13:39:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.675 13:39:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:11.675 ************************************ 00:19:11.675 END TEST xnvme_fio_plugin 00:19:11.675 ************************************ 00:19:11.675 13:39:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:11.675 13:39:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:11.675 13:39:10 nvme_xnvme 
-- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:11.675 13:39:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:11.675 13:39:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:11.675 13:39:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.675 13:39:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.675 ************************************ 00:19:11.675 START TEST xnvme_rpc 00:19:11.675 ************************************ 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71607 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71607 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71607 ']' 00:19:11.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.675 13:39:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.675 [2024-11-20 13:39:11.067102] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
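waitforlisten, traced at the top of each xnvme_rpc run, is essentially a bounded poll: keep checking that the target is still alive and that the RPC socket has come up, up to the max_retries=100 seen in the trace. A loose sketch of the idea only; the real helper in autotest_common.sh probes the socket through the RPC client rather than a bare filesystem test:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" & pid=$!
    for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break          # RPC socket is up
      kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
      sleep 0.5
    done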
00:19:11.675 [2024-11-20 13:39:11.067260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71607 ] 00:19:11.936 [2024-11-20 13:39:11.231096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.195 [2024-11-20 13:39:11.362598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.810 xnvme_bdev 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.810 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71607 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71607 ']' 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71607 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71607 00:19:13.094 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.095 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.095 killing process with pid 71607 00:19:13.095 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71607' 00:19:13.095 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71607 00:19:13.095 13:39:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71607 00:19:14.476 00:19:14.476 real 0m2.845s 00:19:14.476 user 0m2.834s 00:19:14.476 sys 0m0.501s 00:19:14.476 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.476 ************************************ 00:19:14.476 END TEST xnvme_rpc 00:19:14.476 ************************************ 00:19:14.476 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:14.476 13:39:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:14.476 13:39:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:14.476 13:39:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.476 13:39:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.476 ************************************ 00:19:14.476 START TEST xnvme_bdevperf 00:19:14.476 ************************************ 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:14.476 13:39:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:14.735 { 00:19:14.735 "subsystems": [ 00:19:14.735 { 00:19:14.735 "subsystem": "bdev", 00:19:14.735 "config": [ 00:19:14.735 { 00:19:14.735 "params": { 00:19:14.735 "io_mechanism": "io_uring_cmd", 00:19:14.735 "conserve_cpu": true, 00:19:14.735 "filename": "/dev/ng0n1", 00:19:14.735 "name": "xnvme_bdev" 00:19:14.735 }, 00:19:14.735 "method": "bdev_xnvme_create" 00:19:14.735 }, 00:19:14.736 { 00:19:14.736 "method": "bdev_wait_for_examine" 00:19:14.736 } 00:19:14.736 ] 00:19:14.736 } 00:19:14.736 ] 00:19:14.736 } 00:19:14.736 [2024-11-20 13:39:13.947661] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:14.736 [2024-11-20 13:39:13.947785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71676 ] 00:19:14.736 [2024-11-20 13:39:14.107930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.996 [2024-11-20 13:39:14.210667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.255 Running I/O for 5 seconds... 00:19:17.130 46280.00 IOPS, 180.78 MiB/s [2024-11-20T13:39:17.490Z] 51282.00 IOPS, 200.32 MiB/s [2024-11-20T13:39:18.862Z] 55626.00 IOPS, 217.29 MiB/s [2024-11-20T13:39:19.804Z] 57735.50 IOPS, 225.53 MiB/s 00:19:20.377 Latency(us) 00:19:20.377 [2024-11-20T13:39:19.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.377 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:20.377 xnvme_bdev : 5.00 58493.65 228.49 0.00 0.00 1089.99 441.11 11090.71 00:19:20.377 [2024-11-20T13:39:19.804Z] =================================================================================================================== 00:19:20.377 [2024-11-20T13:39:19.804Z] Total : 58493.65 228.49 0.00 0.00 1089.99 441.11 11090.71 00:19:20.940 13:39:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:20.940 13:39:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:20.940 13:39:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:20.940 13:39:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:20.940 13:39:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:20.940 { 00:19:20.940 "subsystems": [ 00:19:20.940 { 00:19:20.940 "subsystem": "bdev", 00:19:20.940 "config": [ 00:19:20.940 { 00:19:20.941 "params": { 00:19:20.941 "io_mechanism": "io_uring_cmd", 00:19:20.941 "conserve_cpu": true, 00:19:20.941 "filename": "/dev/ng0n1", 00:19:20.941 "name": "xnvme_bdev" 00:19:20.941 }, 00:19:20.941 "method": "bdev_xnvme_create" 00:19:20.941 }, 00:19:20.941 { 00:19:20.941 "method": "bdev_wait_for_examine" 00:19:20.941 } 00:19:20.941 ] 00:19:20.941 } 00:19:20.941 ] 00:19:20.941 } 00:19:20.941 [2024-11-20 13:39:20.265137] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
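Every bdevperf pass in this suite is wired the same way: gen_conf emits the JSON shown above on a pipe and bdevperf reads it back as --json /dev/fd/62, with the workload selected by -q/-w/-t/-o and -T naming the bdev to report on. A hedged standalone equivalent is sketched below; the JSON body and the flags are verbatim from this randread run, while the temporary file stands in for the fd plumbing and is an assumption, not what the harness runs:

    # Sketch: 64-deep 4 KiB random reads for 5 s against the xnvme bdev.
    conf=/tmp/xnvme_bdev.json
    printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
       "conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}' > "$conf"
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json "$conf" -q 64 -w randread -t 5 -T xnvme_bdev -o 4096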
00:19:20.941 [2024-11-20 13:39:20.265250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71750 ] 00:19:21.198 [2024-11-20 13:39:20.421922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.198 [2024-11-20 13:39:20.522643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.458 Running I/O for 5 seconds... 00:19:23.783 55104.00 IOPS, 215.25 MiB/s [2024-11-20T13:39:23.774Z] 56479.50 IOPS, 220.62 MiB/s [2024-11-20T13:39:25.145Z] 55829.00 IOPS, 218.08 MiB/s [2024-11-20T13:39:26.122Z] 55759.75 IOPS, 217.81 MiB/s 00:19:26.695 Latency(us) 00:19:26.695 [2024-11-20T13:39:26.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.695 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:26.695 xnvme_bdev : 5.00 56031.48 218.87 0.00 0.00 1137.19 535.63 3957.37 00:19:26.695 [2024-11-20T13:39:26.122Z] =================================================================================================================== 00:19:26.695 [2024-11-20T13:39:26.122Z] Total : 56031.48 218.87 0.00 0.00 1137.19 535.63 3957.37 00:19:27.261 13:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:27.261 13:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:27.261 13:39:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:27.261 13:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:27.261 13:39:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:27.261 { 00:19:27.261 "subsystems": [ 00:19:27.261 { 00:19:27.261 "subsystem": "bdev", 00:19:27.261 "config": [ 00:19:27.261 { 00:19:27.261 "params": { 00:19:27.261 "io_mechanism": "io_uring_cmd", 00:19:27.261 "conserve_cpu": true, 00:19:27.261 "filename": "/dev/ng0n1", 00:19:27.261 "name": "xnvme_bdev" 00:19:27.261 }, 00:19:27.261 "method": "bdev_xnvme_create" 00:19:27.261 }, 00:19:27.261 { 00:19:27.261 "method": "bdev_wait_for_examine" 00:19:27.261 } 00:19:27.261 ] 00:19:27.261 } 00:19:27.261 ] 00:19:27.261 } 00:19:27.261 [2024-11-20 13:39:26.564307] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:27.261 [2024-11-20 13:39:26.564432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71830 ] 00:19:27.520 [2024-11-20 13:39:26.727259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.520 [2024-11-20 13:39:26.828868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.778 Running I/O for 5 seconds... 
00:19:30.086 94592.00 IOPS, 369.50 MiB/s [2024-11-20T13:39:30.444Z] 94656.00 IOPS, 369.75 MiB/s [2024-11-20T13:39:31.377Z] 95189.33 IOPS, 371.83 MiB/s [2024-11-20T13:39:32.310Z] 95472.00 IOPS, 372.94 MiB/s 00:19:32.883 Latency(us) 00:19:32.883 [2024-11-20T13:39:32.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.883 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:32.883 xnvme_bdev : 5.00 94753.39 370.13 0.00 0.00 672.04 343.43 2495.41 00:19:32.883 [2024-11-20T13:39:32.310Z] =================================================================================================================== 00:19:32.883 [2024-11-20T13:39:32.310Z] Total : 94753.39 370.13 0.00 0.00 672.04 343.43 2495.41 00:19:33.536 13:39:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:33.536 13:39:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:33.536 13:39:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:33.536 13:39:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:33.536 13:39:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:33.536 { 00:19:33.536 "subsystems": [ 00:19:33.536 { 00:19:33.536 "subsystem": "bdev", 00:19:33.536 "config": [ 00:19:33.536 { 00:19:33.536 "params": { 00:19:33.536 "io_mechanism": "io_uring_cmd", 00:19:33.536 "conserve_cpu": true, 00:19:33.536 "filename": "/dev/ng0n1", 00:19:33.536 "name": "xnvme_bdev" 00:19:33.536 }, 00:19:33.536 "method": "bdev_xnvme_create" 00:19:33.536 }, 00:19:33.536 { 00:19:33.536 "method": "bdev_wait_for_examine" 00:19:33.536 } 00:19:33.536 ] 00:19:33.536 } 00:19:33.536 ] 00:19:33.536 } 00:19:33.536 [2024-11-20 13:39:32.857414] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:33.536 [2024-11-20 13:39:32.857536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71904 ] 00:19:33.798 [2024-11-20 13:39:33.017249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.798 [2024-11-20 13:39:33.158668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.057 Running I/O for 5 seconds... 
00:19:36.364 22183.00 IOPS, 86.65 MiB/s [2024-11-20T13:39:36.727Z] 20702.50 IOPS, 80.87 MiB/s [2024-11-20T13:39:37.713Z] 19057.67 IOPS, 74.44 MiB/s [2024-11-20T13:39:38.647Z] 15866.00 IOPS, 61.98 MiB/s [2024-11-20T13:39:38.905Z] 13865.40 IOPS, 54.16 MiB/s 00:19:39.478 Latency(us) 00:19:39.478 [2024-11-20T13:39:38.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.478 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:39.478 xnvme_bdev : 5.29 13106.54 51.20 0.00 0.00 4861.74 57.11 493637.32 00:19:39.478 [2024-11-20T13:39:38.905Z] =================================================================================================================== 00:19:39.478 [2024-11-20T13:39:38.905Z] Total : 13106.54 51.20 0.00 0.00 4861.74 57.11 493637.32 00:19:40.411 00:19:40.411 real 0m25.599s 00:19:40.411 user 0m16.525s 00:19:40.411 sys 0m8.078s 00:19:40.411 13:39:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.411 ************************************ 00:19:40.411 END TEST xnvme_bdevperf 00:19:40.411 ************************************ 00:19:40.411 13:39:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 13:39:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:40.411 13:39:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.411 13:39:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.411 13:39:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 ************************************ 00:19:40.411 START TEST xnvme_fio_plugin 00:19:40.411 ************************************ 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:40.411 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # 
gen_conf 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:40.412 13:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:40.412 { 00:19:40.412 "subsystems": [ 00:19:40.412 { 00:19:40.412 "subsystem": "bdev", 00:19:40.412 "config": [ 00:19:40.412 { 00:19:40.412 "params": { 00:19:40.412 "io_mechanism": "io_uring_cmd", 00:19:40.412 "conserve_cpu": true, 00:19:40.412 "filename": "/dev/ng0n1", 00:19:40.412 "name": "xnvme_bdev" 00:19:40.412 }, 00:19:40.412 "method": "bdev_xnvme_create" 00:19:40.412 }, 00:19:40.412 { 00:19:40.412 "method": "bdev_wait_for_examine" 00:19:40.412 } 00:19:40.412 ] 00:19:40.412 } 00:19:40.412 ] 00:19:40.412 } 00:19:40.412 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:40.412 fio-3.35 00:19:40.412 Starting 1 thread 00:19:46.972 00:19:46.972 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72017: Wed Nov 20 13:39:45 2024 00:19:46.972 read: IOPS=64.7k, BW=253MiB/s (265MB/s)(1264MiB/5001msec) 00:19:46.972 slat (nsec): min=2140, max=76668, avg=3594.45, stdev=1535.29 00:19:46.972 clat (usec): min=516, max=5237, avg=847.12, stdev=163.45 00:19:46.972 lat (usec): min=518, max=5240, avg=850.71, stdev=163.89 00:19:46.972 clat percentiles (usec): 00:19:46.972 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 717], 00:19:46.972 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 848], 00:19:46.972 | 70.00th=[ 881], 80.00th=[ 963], 90.00th=[ 1074], 95.00th=[ 1156], 00:19:46.972 | 99.00th=[ 1401], 99.50th=[ 1500], 99.90th=[ 1729], 99.95th=[ 1860], 00:19:46.972 | 99.99th=[ 2114] 00:19:46.972 bw ( KiB/s): min=247824, max=270832, per=99.96%, avg=258730.67, stdev=6795.32, samples=9 00:19:46.972 iops : min=61956, max=67708, avg=64682.67, stdev=1698.83, samples=9 00:19:46.972 lat (usec) : 750=30.53%, 1000=53.47% 00:19:46.972 lat (msec) : 2=15.98%, 4=0.02%, 10=0.01% 00:19:46.972 cpu : usr=46.14%, sys=51.54%, ctx=31, majf=0, minf=762 00:19:46.972 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:46.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.972 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:19:46.972 issued rwts: total=323613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.972 00:19:46.972 Run status group 0 (all jobs): 00:19:46.972 READ: bw=253MiB/s (265MB/s), 253MiB/s-253MiB/s (265MB/s-265MB/s), io=1264MiB (1326MB), run=5001-5001msec 00:19:46.972 ----------------------------------------------------- 00:19:46.972 Suppressions used: 00:19:46.972 count bytes template 00:19:46.972 1 11 /usr/src/fio/parse.c 00:19:46.972 1 8 libtcmalloc_minimal.so 00:19:46.972 1 904 libcrypto.so 00:19:46.972 ----------------------------------------------------- 00:19:46.972 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:46.972 13:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:46.972 { 00:19:46.972 "subsystems": [ 00:19:46.972 { 00:19:46.972 "subsystem": "bdev", 00:19:46.972 "config": [ 00:19:46.972 { 00:19:46.972 "params": { 00:19:46.972 "io_mechanism": "io_uring_cmd", 00:19:46.972 "conserve_cpu": true, 00:19:46.972 "filename": "/dev/ng0n1", 00:19:46.972 "name": "xnvme_bdev" 00:19:46.972 }, 00:19:46.972 "method": "bdev_xnvme_create" 00:19:46.972 }, 00:19:46.972 { 00:19:46.972 "method": "bdev_wait_for_examine" 00:19:46.972 } 00:19:46.972 ] 00:19:46.972 } 00:19:46.972 ] 00:19:46.972 } 00:19:47.229 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:47.229 fio-3.35 00:19:47.229 Starting 1 thread 00:19:53.779 00:19:53.779 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72112: Wed Nov 20 13:39:52 2024 00:19:53.779 write: IOPS=56.0k, BW=219MiB/s (229MB/s)(1094MiB/5001msec); 0 zone resets 00:19:53.779 slat (usec): min=2, max=384, avg= 4.62, stdev= 3.70 00:19:53.779 clat (usec): min=554, max=5493, avg=964.82, stdev=316.54 00:19:53.779 lat (usec): min=556, max=5499, avg=969.44, stdev=319.11 00:19:53.779 clat percentiles (usec): 00:19:53.779 | 1.00th=[ 644], 5.00th=[ 685], 10.00th=[ 717], 20.00th=[ 766], 00:19:53.779 | 30.00th=[ 799], 40.00th=[ 840], 50.00th=[ 881], 60.00th=[ 922], 00:19:53.779 | 70.00th=[ 988], 80.00th=[ 1090], 90.00th=[ 1270], 95.00th=[ 1598], 00:19:53.779 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2737], 99.95th=[ 2900], 00:19:53.779 | 99.99th=[ 5276] 00:19:53.779 bw ( KiB/s): min=132608, max=248832, per=98.82%, avg=221362.67, stdev=36902.61, samples=9 00:19:53.779 iops : min=33152, max=62208, avg=55341.11, stdev=9225.93, samples=9 00:19:53.779 lat (usec) : 750=17.04%, 1000=54.62% 00:19:53.779 lat (msec) : 2=25.85%, 4=2.48%, 10=0.02% 00:19:53.779 cpu : usr=53.08%, sys=44.52%, ctx=11, majf=0, minf=763 00:19:53.779 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:53.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.779 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:53.779 issued rwts: total=0,280061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.779 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:53.779 00:19:53.779 Run status group 0 (all jobs): 00:19:53.779 WRITE: bw=219MiB/s (229MB/s), 219MiB/s-219MiB/s (229MB/s-229MB/s), io=1094MiB (1147MB), run=5001-5001msec 00:19:53.779 ----------------------------------------------------- 00:19:53.779 Suppressions used: 00:19:53.779 count bytes template 00:19:53.779 1 11 /usr/src/fio/parse.c 00:19:53.779 1 8 libtcmalloc_minimal.so 00:19:53.779 1 904 libcrypto.so 00:19:53.779 ----------------------------------------------------- 00:19:53.779 00:19:53.779 00:19:53.779 real 0m13.608s 00:19:53.779 user 0m7.742s 00:19:53.779 sys 0m5.289s 00:19:53.779 13:39:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.779 13:39:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:53.779 ************************************ 00:19:53.779 END TEST xnvme_fio_plugin 00:19:53.779 ************************************ 00:19:53.779 13:39:53 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71607 00:19:53.779 13:39:53 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71607 ']' 00:19:53.779 13:39:53 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71607 00:19:53.779 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71607) - No such process 00:19:53.779 Process with pid 71607 is not found 00:19:53.779 13:39:53 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71607 is not found' 00:19:53.779 00:19:53.779 real 3m29.382s 00:19:53.779 user 1m56.618s 00:19:53.779 sys 1m17.485s 00:19:53.779 13:39:53 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.779 13:39:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:53.779 ************************************ 00:19:53.779 END TEST nvme_xnvme 00:19:53.779 ************************************ 00:19:53.779 13:39:53 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:53.779 13:39:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.779 13:39:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.779 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:19:53.779 ************************************ 00:19:53.779 START TEST blockdev_xnvme 00:19:53.779 ************************************ 00:19:53.779 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:54.038 * Looking for test storage... 00:19:54.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.038 13:39:53 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.038 --rc genhtml_branch_coverage=1 00:19:54.038 --rc genhtml_function_coverage=1 00:19:54.038 --rc genhtml_legend=1 00:19:54.038 --rc geninfo_all_blocks=1 00:19:54.038 --rc geninfo_unexecuted_blocks=1 00:19:54.038 00:19:54.038 ' 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.038 --rc genhtml_branch_coverage=1 00:19:54.038 --rc genhtml_function_coverage=1 00:19:54.038 --rc genhtml_legend=1 00:19:54.038 --rc geninfo_all_blocks=1 00:19:54.038 --rc geninfo_unexecuted_blocks=1 00:19:54.038 00:19:54.038 ' 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.038 --rc genhtml_branch_coverage=1 00:19:54.038 --rc genhtml_function_coverage=1 00:19:54.038 --rc genhtml_legend=1 00:19:54.038 --rc geninfo_all_blocks=1 00:19:54.038 --rc geninfo_unexecuted_blocks=1 00:19:54.038 00:19:54.038 ' 00:19:54.038 13:39:53 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.038 --rc genhtml_branch_coverage=1 00:19:54.038 --rc genhtml_function_coverage=1 00:19:54.038 --rc genhtml_legend=1 00:19:54.038 --rc geninfo_all_blocks=1 00:19:54.038 --rc geninfo_unexecuted_blocks=1 00:19:54.038 00:19:54.038 ' 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:54.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:54.038 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72242 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72242 00:19:54.039 13:39:53 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72242 ']' 00:19:54.039 13:39:53 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.039 13:39:53 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:54.039 13:39:53 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.039 13:39:53 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.039 13:39:53 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.039 13:39:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.039 [2024-11-20 13:39:53.406844] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
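setup_xnvme_conf, traced below, enumerates every /dev/nvme*n* namespace, skips anything that reports itself as zoned, and batches one bdev_xnvme_create call per surviving device through a single rpc_cmd pipe. A condensed reconstruction of that loop from the blockdev.sh@88-100 and is_block_zoned lines in the trace follows; treat it as approximate, since the real script keeps the zoned check in a separate helper:

    # Approximate reconstruction of setup_xnvme_conf from the trace.
    io_mechanism=io_uring                          # blockdev.sh@88
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                 # blockdev.sh@95
        dev=${nvme##*/}
        # is_block_zoned: drop namespaces whose queue/zoned attr != none
        if [[ -e /sys/block/$dev/queue/zoned ]] &&
           [[ $(< "/sys/block/$dev/queue/zoned") != none ]]; then
            continue
        fi
        nvmes+=("bdev_xnvme_create $nvme $dev $io_mechanism -c")   # @96
    done
    (( ${#nvmes[@]} > 0 )) || exit 1               # blockdev.sh@99
    printf '%s\n' "${nvmes[@]}" | rpc_cmd          # @100: batch over one socket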
00:19:54.039 [2024-11-20 13:39:53.406960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72242 ] 00:19:54.297 [2024-11-20 13:39:53.563868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.297 [2024-11-20 13:39:53.663624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.864 13:39:54 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.864 13:39:54 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:54.864 13:39:54 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:54.864 13:39:54 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:54.864 13:39:54 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:54.864 13:39:54 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:54.864 13:39:54 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:55.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.997 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:55.997 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:55.997 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:55.997 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:55.997 nvme0n1 00:19:55.997 nvme0n2 00:19:55.997 nvme0n3 00:19:55.997 nvme1n1 00:19:55.997 nvme2n1 00:19:55.997 nvme3n1 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.997 13:39:55 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.997 13:39:55 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:55.997 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:55.998 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "999e4380-70bf-46d2-ac63-a93f15cdfe5a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "999e4380-70bf-46d2-ac63-a93f15cdfe5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "387bf63d-001b-4239-bf1c-75e3818191e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "387bf63d-001b-4239-bf1c-75e3818191e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6caf515d-c31a-417a-ba47-1ad309d4e3ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6caf515d-c31a-417a-ba47-1ad309d4e3ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d059ef35-8ff3-4893-b3aa-653d1fcbaa55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d059ef35-8ff3-4893-b3aa-653d1fcbaa55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "78a53458-9fa9-498b-9421-5c816f5b65e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "78a53458-9fa9-498b-9421-5c816f5b65e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2cd7f2d7-21b4-4476-8cd0-fb57845609c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2cd7f2d7-21b4-4476-8cd0-fb57845609c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:55.998 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:55.998 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:55.998 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:55.998 13:39:55 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72242 00:19:55.998 13:39:55 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72242 ']' 00:19:55.998 13:39:55 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72242 00:19:55.998 13:39:55 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:55.998 13:39:55 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.998 13:39:55 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72242 00:19:56.255 killing process with pid 72242 00:19:56.255 13:39:55 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.255 13:39:55 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.255 13:39:55 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72242' 00:19:56.255 13:39:55 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72242 00:19:56.255 
13:39:55 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72242 00:19:57.627 13:39:56 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:57.627 13:39:56 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:57.627 13:39:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:57.627 13:39:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.627 13:39:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.627 ************************************ 00:19:57.627 START TEST bdev_hello_world 00:19:57.627 ************************************ 00:19:57.627 13:39:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:57.627 [2024-11-20 13:39:56.993188] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:57.627 [2024-11-20 13:39:56.993277] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72525 ] 00:19:57.926 [2024-11-20 13:39:57.148056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.926 [2024-11-20 13:39:57.246887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.209 [2024-11-20 13:39:57.580119] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:58.209 [2024-11-20 13:39:57.580319] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:58.209 [2024-11-20 13:39:57.580339] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:58.209 [2024-11-20 13:39:57.582242] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:58.209 [2024-11-20 13:39:57.582428] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:58.209 [2024-11-20 13:39:57.582447] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:58.209 [2024-11-20 13:39:57.582659] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
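The bdev_hello_world pass above reduces to one example binary run against the first bdev in the generated config. A minimal sketch of the same invocation, assuming the repo layout used in this run:

  # Run SPDK's hello_bdev example against bdev nvme0n1 from the test config:
  # it opens the bdev, writes "Hello World!", reads it back, prints it, and exits.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1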
00:19:58.209 00:19:58.209 [2024-11-20 13:39:57.582679] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:59.141 00:19:59.141 real 0m1.352s 00:19:59.141 user 0m1.088s 00:19:59.141 sys 0m0.150s 00:19:59.141 ************************************ 00:19:59.141 END TEST bdev_hello_world 00:19:59.141 ************************************ 00:19:59.141 13:39:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.141 13:39:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:59.141 13:39:58 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:59.141 13:39:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.141 13:39:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.141 13:39:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.141 ************************************ 00:19:59.141 START TEST bdev_bounds 00:19:59.141 ************************************ 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72557 00:19:59.141 Process bdevio pid: 72557 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72557' 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72557 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72557 ']' 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.141 13:39:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:59.141 [2024-11-20 13:39:58.394654] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
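The bdev_bounds stage starting here drives the bdevio CUnit app in wait mode and then triggers its suites over RPC. A simplified sketch of the two commands the harness wraps (the real script also traps cleanup and waits for the RPC socket before calling tests.py):

  # Start bdevio idle (-w) so it waits for a perform_tests request over RPC,
  # then connect with tests.py to run every registered CUnit suite.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests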
00:19:59.141 [2024-11-20 13:39:58.394955] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72557 ] 00:19:59.141 [2024-11-20 13:39:58.556211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.399 [2024-11-20 13:39:58.659399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.399 [2024-11-20 13:39:58.660089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.399 [2024-11-20 13:39:58.660103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.964 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.964 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:59.964 13:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:59.964 I/O targets: 00:19:59.964 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:59.964 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:59.964 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:59.964 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:59.964 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:59.964 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:59.964 00:19:59.964 00:19:59.964 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.964 http://cunit.sourceforge.net/ 00:19:59.964 00:19:59.964 00:19:59.964 Suite: bdevio tests on: nvme3n1 00:19:59.964 Test: blockdev write read block ...passed 00:19:59.964 Test: blockdev write zeroes read block ...passed 00:19:59.964 Test: blockdev write zeroes read no split ...passed 00:19:59.964 Test: blockdev write zeroes read split ...passed 00:19:59.964 Test: blockdev write zeroes read split partial ...passed 00:19:59.964 Test: blockdev reset ...passed 00:19:59.964 Test: blockdev write read 8 blocks ...passed 00:19:59.964 Test: blockdev write read size > 128k ...passed 00:19:59.964 Test: blockdev write read invalid size ...passed 00:19:59.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:59.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:59.964 Test: blockdev write read max offset ...passed 00:19:59.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:59.964 Test: blockdev writev readv 8 blocks ...passed 00:19:59.964 Test: blockdev writev readv 30 x 1block ...passed 00:19:59.964 Test: blockdev writev readv block ...passed 00:19:59.964 Test: blockdev writev readv size > 128k ...passed 00:19:59.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:59.964 Test: blockdev comparev and writev ...passed 00:19:59.964 Test: blockdev nvme passthru rw ...passed 00:19:59.964 Test: blockdev nvme passthru vendor specific ...passed 00:19:59.964 Test: blockdev nvme admin passthru ...passed 00:19:59.964 Test: blockdev copy ...passed 00:19:59.964 Suite: bdevio tests on: nvme2n1 00:19:59.964 Test: blockdev write read block ...passed 00:19:59.964 Test: blockdev write zeroes read block ...passed 00:20:00.221 Test: blockdev write zeroes read no split ...passed 00:20:00.221 Test: blockdev write zeroes read split ...passed 00:20:00.221 Test: blockdev write zeroes read split partial ...passed 00:20:00.221 Test: blockdev reset ...passed 
00:20:00.221 Test: blockdev write read 8 blocks ...passed 00:20:00.221 Test: blockdev write read size > 128k ...passed 00:20:00.221 Test: blockdev write read invalid size ...passed 00:20:00.221 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:00.221 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:00.221 Test: blockdev write read max offset ...passed 00:20:00.221 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:00.221 Test: blockdev writev readv 8 blocks ...passed 00:20:00.221 Test: blockdev writev readv 30 x 1block ...passed 00:20:00.221 Test: blockdev writev readv block ...passed 00:20:00.221 Test: blockdev writev readv size > 128k ...passed 00:20:00.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:00.221 Test: blockdev comparev and writev ...passed 00:20:00.221 Test: blockdev nvme passthru rw ...passed 00:20:00.221 Test: blockdev nvme passthru vendor specific ...passed 00:20:00.221 Test: blockdev nvme admin passthru ...passed 00:20:00.221 Test: blockdev copy ...passed 00:20:00.221 Suite: bdevio tests on: nvme1n1 00:20:00.221 Test: blockdev write read block ...passed 00:20:00.221 Test: blockdev write zeroes read block ...passed 00:20:00.221 Test: blockdev write zeroes read no split ...passed 00:20:00.221 Test: blockdev write zeroes read split ...passed 00:20:00.221 Test: blockdev write zeroes read split partial ...passed 00:20:00.221 Test: blockdev reset ...passed 00:20:00.221 Test: blockdev write read 8 blocks ...passed 00:20:00.221 Test: blockdev write read size > 128k ...passed 00:20:00.221 Test: blockdev write read invalid size ...passed 00:20:00.221 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:00.221 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:00.221 Test: blockdev write read max offset ...passed 00:20:00.221 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:00.221 Test: blockdev writev readv 8 blocks ...passed 00:20:00.221 Test: blockdev writev readv 30 x 1block ...passed 00:20:00.221 Test: blockdev writev readv block ...passed 00:20:00.222 Test: blockdev writev readv size > 128k ...passed 00:20:00.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:00.222 Test: blockdev comparev and writev ...passed 00:20:00.222 Test: blockdev nvme passthru rw ...passed 00:20:00.222 Test: blockdev nvme passthru vendor specific ...passed 00:20:00.222 Test: blockdev nvme admin passthru ...passed 00:20:00.222 Test: blockdev copy ...passed 00:20:00.222 Suite: bdevio tests on: nvme0n3 00:20:00.222 Test: blockdev write read block ...passed 00:20:00.222 Test: blockdev write zeroes read block ...passed 00:20:00.222 Test: blockdev write zeroes read no split ...passed 00:20:00.222 Test: blockdev write zeroes read split ...passed 00:20:00.222 Test: blockdev write zeroes read split partial ...passed 00:20:00.222 Test: blockdev reset ...passed 00:20:00.222 Test: blockdev write read 8 blocks ...passed 00:20:00.222 Test: blockdev write read size > 128k ...passed 00:20:00.222 Test: blockdev write read invalid size ...passed 00:20:00.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:00.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:00.222 Test: blockdev write read max offset ...passed 00:20:00.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:00.222 Test: blockdev writev readv 8 blocks 
...passed 00:20:00.222 Test: blockdev writev readv 30 x 1block ...passed 00:20:00.222 Test: blockdev writev readv block ...passed 00:20:00.222 Test: blockdev writev readv size > 128k ...passed 00:20:00.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:00.222 Test: blockdev comparev and writev ...passed 00:20:00.222 Test: blockdev nvme passthru rw ...passed 00:20:00.222 Test: blockdev nvme passthru vendor specific ...passed 00:20:00.222 Test: blockdev nvme admin passthru ...passed 00:20:00.222 Test: blockdev copy ...passed 00:20:00.222 Suite: bdevio tests on: nvme0n2 00:20:00.222 Test: blockdev write read block ...passed 00:20:00.222 Test: blockdev write zeroes read block ...passed 00:20:00.222 Test: blockdev write zeroes read no split ...passed 00:20:00.222 Test: blockdev write zeroes read split ...passed 00:20:00.222 Test: blockdev write zeroes read split partial ...passed 00:20:00.222 Test: blockdev reset ...passed 00:20:00.222 Test: blockdev write read 8 blocks ...passed 00:20:00.222 Test: blockdev write read size > 128k ...passed 00:20:00.222 Test: blockdev write read invalid size ...passed 00:20:00.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:00.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:00.222 Test: blockdev write read max offset ...passed 00:20:00.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:00.222 Test: blockdev writev readv 8 blocks ...passed 00:20:00.222 Test: blockdev writev readv 30 x 1block ...passed 00:20:00.222 Test: blockdev writev readv block ...passed 00:20:00.222 Test: blockdev writev readv size > 128k ...passed 00:20:00.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:00.222 Test: blockdev comparev and writev ...passed 00:20:00.222 Test: blockdev nvme passthru rw ...passed 00:20:00.222 Test: blockdev nvme passthru vendor specific ...passed 00:20:00.222 Test: blockdev nvme admin passthru ...passed 00:20:00.222 Test: blockdev copy ...passed 00:20:00.222 Suite: bdevio tests on: nvme0n1 00:20:00.222 Test: blockdev write read block ...passed 00:20:00.222 Test: blockdev write zeroes read block ...passed 00:20:00.222 Test: blockdev write zeroes read no split ...passed 00:20:00.222 Test: blockdev write zeroes read split ...passed 00:20:00.222 Test: blockdev write zeroes read split partial ...passed 00:20:00.222 Test: blockdev reset ...passed 00:20:00.222 Test: blockdev write read 8 blocks ...passed 00:20:00.222 Test: blockdev write read size > 128k ...passed 00:20:00.222 Test: blockdev write read invalid size ...passed 00:20:00.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:00.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:00.222 Test: blockdev write read max offset ...passed 00:20:00.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:00.222 Test: blockdev writev readv 8 blocks ...passed 00:20:00.222 Test: blockdev writev readv 30 x 1block ...passed 00:20:00.222 Test: blockdev writev readv block ...passed 00:20:00.222 Test: blockdev writev readv size > 128k ...passed 00:20:00.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:00.222 Test: blockdev comparev and writev ...passed 00:20:00.222 Test: blockdev nvme passthru rw ...passed 00:20:00.222 Test: blockdev nvme passthru vendor specific ...passed 00:20:00.222 Test: blockdev nvme admin passthru ...passed 00:20:00.222 Test: blockdev copy ...passed 
00:20:00.222 00:20:00.222 Run Summary: Type Total Ran Passed Failed Inactive 00:20:00.222 suites 6 6 n/a 0 0 00:20:00.222 tests 138 138 138 0 0 00:20:00.222 asserts 780 780 780 0 n/a 00:20:00.222 00:20:00.222 Elapsed time = 0.864 seconds 00:20:00.222 0 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72557 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72557 ']' 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72557 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72557 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72557' 00:20:00.480 killing process with pid 72557 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72557 00:20:00.480 13:39:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72557 00:20:01.046 13:40:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:01.046 ************************************ 00:20:01.046 END TEST bdev_bounds 00:20:01.046 ************************************ 00:20:01.046 00:20:01.046 real 0m2.078s 00:20:01.046 user 0m5.199s 00:20:01.046 sys 0m0.282s 00:20:01.046 13:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.046 13:40:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:01.046 13:40:00 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:01.046 13:40:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:01.046 13:40:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.046 13:40:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.046 ************************************ 00:20:01.046 START TEST bdev_nbd 00:20:01.046 ************************************ 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
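The run summary above is internally consistent: each of the six suites executes the same 23 tests, 6 suites x 23 tests = 138, all passing. The bdev_nbd stage now being set up exports each bdev through the kernel NBD driver and smoke-tests it with dd; one iteration of that loop, under the same socket and paths as this run, looks roughly like:

  # Attach a kernel NBD device for one bdev over the dedicated RPC socket
  # (with no explicit /dev/nbdX argument, SPDK picks the next free device),
  # then sanity-read one 4 KiB block through it with O_DIRECT.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct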
00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72611 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72611 /var/tmp/spdk-nbd.sock 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72611 ']' 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:01.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.046 13:40:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:01.304 [2024-11-20 13:40:00.510491] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
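Each attach below is gated by the waitfornbd helper, which polls /proc/partitions until the kernel registers the new device before issuing the dd read. An approximate reconstruction of its loop (only the grep, the bound of 20, and the break are visible in the xtrace; the retry delay is an assumption):

  # Poll up to 20 times for the named nbd device to appear in /proc/partitions.
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1   # assumed delay; not shown in the trace
  done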
00:20:01.304 [2024-11-20 13:40:00.511098] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.304 [2024-11-20 13:40:00.671544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.562 [2024-11-20 13:40:00.771214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:02.128 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:02.386 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.387 
1+0 records in 00:20:02.387 1+0 records out 00:20:02.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046053 s, 8.9 MB/s 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:02.387 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.644 1+0 records in 00:20:02.644 1+0 records out 00:20:02.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341695 s, 12.0 MB/s 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:02.644 13:40:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:02.644 13:40:02 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:02.644 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.902 1+0 records in 00:20:02.902 1+0 records out 00:20:02.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525073 s, 7.8 MB/s 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.902 1+0 records in 00:20:02.902 1+0 records out 00:20:02.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625724 s, 6.5 MB/s 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:02.902 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.159 1+0 records in 00:20:03.159 1+0 records out 00:20:03.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038255 s, 10.7 MB/s 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:03.159 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:03.416 13:40:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.416 1+0 records in 00:20:03.416 1+0 records out 00:20:03.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559204 s, 7.3 MB/s 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:03.416 13:40:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd0", 00:20:03.674 "bdev_name": "nvme0n1" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd1", 00:20:03.674 "bdev_name": "nvme0n2" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd2", 00:20:03.674 "bdev_name": "nvme0n3" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd3", 00:20:03.674 "bdev_name": "nvme1n1" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd4", 00:20:03.674 "bdev_name": "nvme2n1" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd5", 00:20:03.674 "bdev_name": "nvme3n1" 00:20:03.674 } 00:20:03.674 ]' 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd0", 00:20:03.674 "bdev_name": "nvme0n1" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd1", 00:20:03.674 "bdev_name": "nvme0n2" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd2", 00:20:03.674 "bdev_name": "nvme0n3" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd3", 00:20:03.674 "bdev_name": "nvme1n1" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd4", 00:20:03.674 "bdev_name": "nvme2n1" 00:20:03.674 }, 00:20:03.674 { 00:20:03.674 "nbd_device": "/dev/nbd5", 00:20:03.674 "bdev_name": "nvme3n1" 00:20:03.674 } 00:20:03.674 ]' 00:20:03.674 13:40:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.674 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.932 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:04.190 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.191 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.501 13:40:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.759 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.017 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:05.275 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:05.606 /dev/nbd0 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.606 1+0 records in 00:20:05.606 1+0 records out 00:20:05.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466509 s, 8.8 MB/s 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:20:05.606 /dev/nbd1 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.606 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:05.607 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:05.607 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.607 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.607 13:40:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.607 1+0 records in 00:20:05.607 1+0 records out 00:20:05.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292485 s, 14.0 MB/s 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:05.607 13:40:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:05.607 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:20:05.865 /dev/nbd10 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.865 1+0 records in 00:20:05.865 1+0 records out 00:20:05.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543218 s, 7.5 MB/s 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:05.865 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:20:06.123 /dev/nbd11 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.123 13:40:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.123 1+0 records in 00:20:06.123 1+0 records out 00:20:06.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435927 s, 9.4 MB/s 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:06.123 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:20:06.381 /dev/nbd12 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.381 1+0 records in 00:20:06.381 1+0 records out 00:20:06.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052755 s, 7.8 MB/s 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:06.381 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:06.639 /dev/nbd13 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.639 1+0 records in 00:20:06.639 1+0 records out 00:20:06.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460927 s, 8.9 MB/s 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:06.639 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:06.640 13:40:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:06.897 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd0", 00:20:06.897 "bdev_name": "nvme0n1" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd1", 00:20:06.897 "bdev_name": "nvme0n2" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd10", 00:20:06.897 "bdev_name": "nvme0n3" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd11", 00:20:06.897 "bdev_name": "nvme1n1" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd12", 00:20:06.897 "bdev_name": "nvme2n1" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd13", 00:20:06.897 "bdev_name": "nvme3n1" 00:20:06.897 } 00:20:06.897 ]' 00:20:06.897 13:40:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd0", 00:20:06.897 "bdev_name": "nvme0n1" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd1", 00:20:06.897 "bdev_name": "nvme0n2" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd10", 00:20:06.897 "bdev_name": "nvme0n3" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd11", 00:20:06.897 "bdev_name": "nvme1n1" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd12", 00:20:06.897 "bdev_name": "nvme2n1" 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "nbd_device": "/dev/nbd13", 00:20:06.897 "bdev_name": "nvme3n1" 00:20:06.898 } 00:20:06.898 ]' 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:06.898 /dev/nbd1 00:20:06.898 /dev/nbd10 00:20:06.898 /dev/nbd11 00:20:06.898 /dev/nbd12 00:20:06.898 /dev/nbd13' 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:06.898 /dev/nbd1 00:20:06.898 /dev/nbd10 00:20:06.898 /dev/nbd11 00:20:06.898 /dev/nbd12 00:20:06.898 /dev/nbd13' 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:06.898 256+0 records in 00:20:06.898 256+0 records out 00:20:06.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743435 s, 141 MB/s 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:06.898 256+0 records in 00:20:06.898 256+0 records out 00:20:06.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0608583 s, 17.2 MB/s 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:06.898 256+0 records in 00:20:06.898 256+0 records out 00:20:06.898 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0578046 s, 18.1 MB/s 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:06.898 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:07.155 256+0 records in 00:20:07.155 256+0 records out 00:20:07.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0543328 s, 19.3 MB/s 00:20:07.155 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:07.155 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:07.155 256+0 records in 00:20:07.155 256+0 records out 00:20:07.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0615781 s, 17.0 MB/s 00:20:07.155 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:07.155 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:07.155 256+0 records in 00:20:07.155 256+0 records out 00:20:07.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.059953 s, 17.5 MB/s 00:20:07.155 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:07.155 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:07.412 256+0 records in 00:20:07.413 256+0 records out 00:20:07.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0714279 s, 14.7 MB/s 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.413 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:07.671 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.671 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.671 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.671 13:40:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.671 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.929 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:08.187 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:08.446 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:08.704 13:40:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:08.704 13:40:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:08.704 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:08.704 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:08.704 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:08.962 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:08.963 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:08.963 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:08.963 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:08.963 malloc_lvol_verify 00:20:08.963 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:09.221 4f9dcab0-9a6b-49d2-ac65-7586dce419eb 00:20:09.221 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:09.479 8db90546-8f23-4ff4-92f6-ee83174c5932 00:20:09.479 13:40:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:09.738 /dev/nbd0 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
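The nbd_with_lvol_verify step traced above layers a logical volume on a malloc bdev and exports it over NBD; building an ext4 filesystem on it (the mke2fs output follows) is the proof that the whole stack works end to end. Condensed from the rpc.py calls in the trace, the flow is roughly as below; the size comments are an interpretation of rpc.py's MiB conventions, not output of this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside store "lvs"
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # export as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                             # must succeed for the test to pass
    $rpc -s $sock nbd_stop_disk /dev/nbd0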
00:20:09.738 mke2fs 1.47.0 (5-Feb-2023) 00:20:09.738 Discarding device blocks: 0/4096 done 00:20:09.738 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:09.738 00:20:09.738 Allocating group tables: 0/1 done 00:20:09.738 Writing inode tables: 0/1 done 00:20:09.738 Creating journal (1024 blocks): done 00:20:09.738 Writing superblocks and filesystem accounting information: 0/1 done 00:20:09.738 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:09.738 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72611 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72611 ']' 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72611 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72611 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.995 killing process with pid 72611 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72611' 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72611 00:20:09.995 13:40:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72611 00:20:10.929 13:40:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:10.929 00:20:10.929 real 0m9.609s 00:20:10.929 user 0m13.834s 00:20:10.929 sys 0m3.115s 00:20:10.929 13:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.929 13:40:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:10.929 ************************************ 
00:20:10.929 END TEST bdev_nbd 00:20:10.929 ************************************ 00:20:10.929 13:40:10 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:10.929 13:40:10 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:20:10.929 13:40:10 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:20:10.929 13:40:10 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:10.929 13:40:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:10.929 13:40:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.929 13:40:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:10.929 ************************************ 00:20:10.929 START TEST bdev_fio 00:20:10.929 ************************************ 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:10.929 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:20:10.929 ************************************
00:20:10.929 START TEST bdev_fio_rw_verify ************************************
00:20:10.929 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
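The config generation traced above appends one [job_<bdev>] stanza per xNVMe bdev to the job file that fio consumes next; fio_config_gen had already written the global verify options (not shown in this excerpt), and serialize_overlap=1 is appended because this fio build supports it. The per-bdev loop amounts to the sketch below, a paraphrase of the traced bdev/blockdev.sh loop rather than a copy of it:

    cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        echo "[job_$b]"       # one fio job per bdev
        echo "filename=$b"    # bdev name, resolved by the spdk_bdev ioengine
    done >> "$cfg"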
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:10.930 13:40:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:20:10.930 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:20:10.930 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:20:10.930 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:20:10.930 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:20:10.930 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:20:10.930 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:20:10.930 fio-3.35
00:20:10.930 Starting 6 threads
00:20:23.300
00:20:23.300 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=73013: Wed Nov 20 13:40:22 2024
00:20:23.300 read: IOPS=43.8k, BW=171MiB/s (180MB/s)(1713MiB/10001msec)
00:20:23.300 slat (usec): min=2, max=3232, avg= 4.71, stdev= 6.00
00:20:23.300 clat (usec): min=80, max=66170, avg=363.51, stdev=217.21
00:20:23.300 lat (usec): min=85, max=66173, avg=368.22, stdev=217.65
00:20:23.300 clat percentiles (usec):
00:20:23.300 | 50.000th=[ 334], 99.000th=[ 955], 99.900th=[ 1516], 99.990th=[ 3687],
00:20:23.300 | 99.999th=[ 4015]
00:20:23.300 write: IOPS=44.1k, BW=172MiB/s (181MB/s)(1724MiB/10001msec); 0 zone resets
00:20:23.300 slat (usec): min=3, max=4887, avg=23.70, stdev=36.09
00:20:23.300 clat (usec): min=47, max=72455, avg=505.09, stdev=470.07
00:20:23.300 lat (usec): min=65, max=72471, avg=528.79, stdev=472.60
00:20:23.300 clat percentiles (usec):
00:20:23.300 | 50.000th=[ 474], 99.000th=[ 1172], 99.900th=[ 1696], 99.990th=[ 9503],
00:20:23.300 | 99.999th=[71828]
00:20:23.300 bw ( KiB/s): min=155720, max=196743, per=100.00%, avg=176644.53, stdev=1883.29, samples=114
00:20:23.301 iops : min=38930, max=49185, avg=44160.63, stdev=470.80, samples=114
00:20:23.301 lat (usec) : 50=0.01%, 100=0.18%, 250=19.91%, 500=48.17%, 750=23.74%
00:20:23.301 lat (usec) : 1000=6.24%
00:20:23.301 lat (msec) : 2=1.71%, 4=0.05%, 10=0.01%, 20=0.01%, 100=0.01%
00:20:23.301 cpu : usr=48.96%, sys=32.67%, ctx=10733, majf=0, minf=34861
00:20:23.301 IO depths : 1=11.4%, 2=23.6%, 4=51.3%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:23.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:23.301 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:23.301 issued rwts: total=438422,441383,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:23.301 latency : target=0, window=0, percentile=100.00%, depth=8
00:20:23.301
00:20:23.301 Run status group 0 (all jobs):
00:20:23.301 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=1713MiB (1796MB), run=10001-10001msec
00:20:23.301 WRITE: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=1724MiB (1808MB), run=10001-10001msec
00:20:23.866 -----------------------------------------------------
00:20:23.866 Suppressions used:
00:20:23.866 count bytes template
00:20:23.866 6 48 /usr/src/fio/parse.c
00:20:23.866 2664 255744 /usr/src/fio/iolog.c
00:20:23.866 1 8 libtcmalloc_minimal.so
00:20:23.866 1 904 libcrypto.so
00:20:23.866 -----------------------------------------------------
00:20:23.866
00:20:23.866
00:20:23.866 real 0m12.944s
00:20:23.866 user 0m30.818s
00:20:23.866 sys 0m19.880s
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:23.866 ************************************
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:20:23.866 END TEST bdev_fio_rw_verify
00:20:23.866 ************************************
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:20:23.866 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "999e4380-70bf-46d2-ac63-a93f15cdfe5a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "999e4380-70bf-46d2-ac63-a93f15cdfe5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "387bf63d-001b-4239-bf1c-75e3818191e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "387bf63d-001b-4239-bf1c-75e3818191e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6caf515d-c31a-417a-ba47-1ad309d4e3ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6caf515d-c31a-417a-ba47-1ad309d4e3ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d059ef35-8ff3-4893-b3aa-653d1fcbaa55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d059ef35-8ff3-4893-b3aa-653d1fcbaa55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "78a53458-9fa9-498b-9421-5c816f5b65e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "78a53458-9fa9-498b-9421-5c816f5b65e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2cd7f2d7-21b4-4476-8cd0-fb57845609c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2cd7f2d7-21b4-4476-8cd0-fb57845609c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:23.867 /home/vagrant/spdk_repo/spdk 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:20:23.867 00:20:23.867 real 0m13.072s 00:20:23.867 user 0m30.897s 00:20:23.867 sys 0m19.933s 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.867 13:40:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:23.867 ************************************ 00:20:23.867 END TEST bdev_fio 00:20:23.867 ************************************ 00:20:23.867 13:40:23 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:23.867 13:40:23 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:23.867 13:40:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:23.867 13:40:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.867 13:40:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:23.867 ************************************ 00:20:23.867 START TEST bdev_verify 00:20:23.867 ************************************ 00:20:23.867 13:40:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:23.867 [2024-11-20 13:40:23.275541] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:20:23.867 [2024-11-20 13:40:23.275657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73189 ] 00:20:24.125 [2024-11-20 13:40:23.433214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:24.125 [2024-11-20 13:40:23.535246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.125 [2024-11-20 13:40:23.535376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.692 Running I/O for 5 seconds... 
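bdev_verify drives the bdevperf example app against the same bdev.json used by the fio stage: a 4 KiB verify workload at queue depth 128 for 5 seconds, split across the two cores in the 0x3 mask, which is why each bdev reports one job per core in the table below. From the run_test line above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q queue depth, -o I/O size in bytes, -w workload, -t runtime in
    # seconds, -m core mask; -C is passed by the harness as traced above.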
00:20:27.033 26560.00 IOPS, 103.75 MiB/s [2024-11-20T13:40:27.392Z] 25968.00 IOPS, 101.44 MiB/s [2024-11-20T13:40:28.348Z] 24949.33 IOPS, 97.46 MiB/s [2024-11-20T13:40:29.293Z] 24464.00 IOPS, 95.56 MiB/s [2024-11-20T13:40:29.293Z] 24396.80 IOPS, 95.30 MiB/s
00:20:29.866 Latency(us)
00:20:29.866 [2024-11-20T13:40:29.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.866 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x0 length 0x80000
00:20:29.866 nvme0n1 : 5.05 1773.99 6.93 0.00 0.00 72013.45 13409.67 73400.32
00:20:29.866 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x80000 length 0x80000
00:20:29.866 nvme0n1 : 5.05 1722.18 6.73 0.00 0.00 74181.67 5041.23 67350.84
00:20:29.866 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x0 length 0x80000
00:20:29.866 nvme0n2 : 5.05 1773.48 6.93 0.00 0.00 71870.62 17039.36 62914.56
00:20:29.866 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x80000 length 0x80000
00:20:29.866 nvme0n2 : 5.03 1706.14 6.66 0.00 0.00 74708.91 13308.85 67754.14
00:20:29.866 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x0 length 0x80000
00:20:29.866 nvme0n3 : 5.05 1772.93 6.93 0.00 0.00 71735.37 14720.39 62511.26
00:20:29.866 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x80000 length 0x80000
00:20:29.866 nvme0n3 : 5.08 1714.13 6.70 0.00 0.00 74220.79 16837.71 58478.28
00:20:29.866 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:29.866 Verification LBA range: start 0x0 length 0x20000
00:20:29.866 nvme1n1 : 5.08 1788.05 6.98 0.00 0.00 70965.75 11494.01 69367.34
00:20:29.867 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:29.867 Verification LBA range: start 0x20000 length 0x20000
00:20:29.867 nvme1n1 : 5.07 1717.29 6.71 0.00 0.00 73928.05 15123.69 69367.34
00:20:29.867 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:29.867 Verification LBA range: start 0x0 length 0xa0000
00:20:29.867 nvme2n1 : 5.09 1786.55 6.98 0.00 0.00 70878.84 9578.34 66947.54
00:20:29.867 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:29.867 Verification LBA range: start 0xa0000 length 0xa0000
00:20:29.867 nvme2n1 : 5.08 1713.61 6.69 0.00 0.00 73926.17 7057.72 66140.95
00:20:29.867 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:29.867 Verification LBA range: start 0x0 length 0xbd0bd
00:20:29.867 nvme3n1 : 5.08 3396.16 13.27 0.00 0.00 37153.33 3175.98 63317.86
00:20:29.867 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:29.867 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:20:29.867 nvme3n1 : 5.08 3205.26 12.52 0.00 0.00 39367.53 3680.10 58478.28
00:20:29.867 [2024-11-20T13:40:29.294Z] ===================================================================================================================
00:20:29.867 [2024-11-20T13:40:29.294Z] Total : 24069.77 94.02 0.00 0.00 63306.62 3175.98 73400.32
00:20:30.431
00:20:30.431 real 0m6.568s
00:20:30.431 user 0m10.463s
00:20:30.431 sys 0m1.724s
00:20:30.431 13:40:29 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.431 13:40:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:30.431 ************************************ 00:20:30.431 END TEST bdev_verify 00:20:30.431 ************************************ 00:20:30.431 13:40:29 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:30.431 13:40:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:30.431 13:40:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.431 13:40:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:30.431 ************************************ 00:20:30.431 START TEST bdev_verify_big_io 00:20:30.431 ************************************ 00:20:30.431 13:40:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:30.688 [2024-11-20 13:40:29.884944] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:20:30.688 [2024-11-20 13:40:29.885086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73293 ] 00:20:30.688 [2024-11-20 13:40:30.045292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:30.945 [2024-11-20 13:40:30.148301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.945 [2024-11-20 13:40:30.148689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.203 Running I/O for 5 seconds... 
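bdev_verify_big_io repeats the verify pass with the I/O size raised from 4 KiB to 64 KiB, which is why the results below show far lower IOPS at much larger per-I/O latencies. Per the run_test line above, the invocation differs from bdev_verify only in the -o argument:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3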
00:20:36.917 1384.00 IOPS, 86.50 MiB/s [2024-11-20T13:40:36.910Z] 2348.50 IOPS, 146.78 MiB/s [2024-11-20T13:40:36.910Z] 2609.33 IOPS, 163.08 MiB/s
00:20:37.483 Latency(us)
00:20:37.483 [2024-11-20T13:40:36.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:37.483 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x0 length 0x8000
00:20:37.483 nvme0n1 : 6.03 125.95 7.87 0.00 0.00 995267.17 7511.43 1167952.34
00:20:37.483 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x8000 length 0x8000
00:20:37.483 nvme0n1 : 5.85 78.91 4.93 0.00 0.00 1552000.72 214554.78 2994087.78
00:20:37.483 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x0 length 0x8000
00:20:37.483 nvme0n2 : 6.04 103.38 6.46 0.00 0.00 1174492.05 31457.28 1284102.30
00:20:37.483 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x8000 length 0x8000
00:20:37.483 nvme0n2 : 6.14 104.20 6.51 0.00 0.00 1138283.76 8217.21 993727.41
00:20:37.483 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x0 length 0x8000
00:20:37.483 nvme0n3 : 6.01 117.23 7.33 0.00 0.00 990990.25 130668.70 1348630.06
00:20:37.483 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x8000 length 0x8000
00:20:37.483 nvme0n3 : 5.97 101.20 6.33 0.00 0.00 1140699.65 134701.69 2632732.36
00:20:37.483 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x0 length 0x2000
00:20:37.483 nvme1n1 : 6.03 111.46 6.97 0.00 0.00 1015327.88 20366.57 1458327.24
00:20:37.483 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x2000 length 0x2000
00:20:37.483 nvme1n1 : 6.03 99.57 6.22 0.00 0.00 1121185.86 59284.87 1935832.62
00:20:37.483 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x0 length 0xa000
00:20:37.483 nvme2n1 : 6.03 124.69 7.79 0.00 0.00 875014.74 18652.55 1568024.42
00:20:37.483 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0xa000 length 0xa000
00:20:37.483 nvme2n1 : 6.12 133.26 8.33 0.00 0.00 807332.10 46580.97 1180857.90
00:20:37.483 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0x0 length 0xbd0b
00:20:37.483 nvme3n1 : 6.15 119.77 7.49 0.00 0.00 874582.01 151.24 1619646.62
00:20:37.483 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:37.483 Verification LBA range: start 0xbd0b length 0xbd0b
00:20:37.483 nvme3n1 : 6.15 184.98 11.56 0.00 0.00 570011.89 1027.15 1032444.06
00:20:37.483 [2024-11-20T13:40:36.910Z] ===================================================================================================================
00:20:37.483 [2024-11-20T13:40:36.910Z] Total : 1404.60 87.79 0.00 0.00 973910.14 151.24 2994087.78
00:20:38.444
00:20:38.444 real 0m7.849s
00:20:38.444 user 0m14.540s
00:20:38.444 sys 0m0.394s
00:20:38.444 13:40:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:38.444 13:40:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:20:38.444 ************************************
00:20:38.444 END TEST bdev_verify_big_io
00:20:38.444 ************************************
00:20:38.444 13:40:37 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:38.444 13:40:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:38.444 13:40:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:38.444 13:40:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:38.444 ************************************
00:20:38.444 START TEST bdev_write_zeroes
00:20:38.444 ************************************
00:20:38.444 13:40:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:38.702 [2024-11-20 13:40:37.775166] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:20:38.702 [2024-11-20 13:40:37.775289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73404 ]
00:20:38.702 [2024-11-20 13:40:37.934381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:38.702 [2024-11-20 13:40:38.033798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:39.268 Running I/O for 1 seconds...
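Every bdevperf invocation in this suite consumes the generated /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json, whose contents never appear in this log. A hypothetical minimal example of the format it uses (the same subsystems/config/method shape as the save_config dumps later in this log; the bdev_xnvme_create parameters shown are illustrative, not the file's actual contents):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": { "name": "nvme0n1", "filename": "/dev/nvme0n1", "io_mechanism": "io_uring" }
            }
          ]
        }
      ]
    }
    EOF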
00:20:40.203 79040.00 IOPS, 308.75 MiB/s
00:20:40.203 Latency(us)
00:20:40.203 [2024-11-20T13:40:39.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:40.203 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:40.203 nvme0n1 : 1.02 11308.65 44.17 0.00 0.00 11308.77 7813.91 19761.62
00:20:40.203 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:40.203 nvme0n2 : 1.02 11295.09 44.12 0.00 0.00 11313.73 7864.32 20164.92
00:20:40.203 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:40.203 nvme0n3 : 1.02 11281.84 44.07 0.00 0.00 11318.86 7914.73 20568.22
00:20:40.203 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:40.203 nvme1n1 : 1.02 11269.34 44.02 0.00 0.00 11323.62 8015.56 20870.70
00:20:40.203 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:40.203 nvme2n1 : 1.02 11256.71 43.97 0.00 0.00 11327.80 8015.56 21273.99
00:20:40.203 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:40.203 nvme3n1 : 1.03 21665.39 84.63 0.00 0.00 5871.37 2318.97 19459.15
00:20:40.203 [2024-11-20T13:40:39.630Z] ===================================================================================================================
00:20:40.203 [2024-11-20T13:40:39.630Z] Total : 78077.02 304.99 0.00 0.00 9799.63 2318.97 21273.99
00:20:40.768
00:20:40.768 real 0m2.434s
00:20:40.768 user 0m1.682s
00:20:40.768 sys 0m0.575s
00:20:40.768 13:40:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:40.768 ************************************
00:20:40.768 END TEST bdev_write_zeroes
00:20:40.768 13:40:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:20:40.768 ************************************
00:20:40.768 13:40:40 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:40.768 13:40:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:40.768 13:40:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:40.768 13:40:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:40.768 ************************************
00:20:40.768 START TEST bdev_json_nonenclosed
00:20:40.768 ************************************
00:20:41.026 13:40:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:41.026 [2024-11-20 13:40:40.249176] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:20:41.026 [2024-11-20 13:40:40.249307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73451 ]
00:20:41.026 [2024-11-20 13:40:40.409208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:41.283 [2024-11-20 13:40:40.508992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:41.283 [2024-11-20 13:40:40.509077] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:20:41.283 [2024-11-20 13:40:40.509095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:41.283 [2024-11-20 13:40:40.509103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:41.283
00:20:41.283 real 0m0.503s
00:20:41.283 user 0m0.311s
00:20:41.283 sys 0m0.089s
00:20:41.283 13:40:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:41.283 13:40:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:20:41.283 ************************************
00:20:41.283 END TEST bdev_json_nonenclosed
00:20:41.283 ************************************
00:20:41.541 13:40:40 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:41.541 13:40:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:41.541 13:40:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:41.541 13:40:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:41.541 ************************************
00:20:41.541 START TEST bdev_json_nonarray
00:20:41.541 ************************************
00:20:41.541 13:40:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:41.541 [2024-11-20 13:40:40.792364] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:20:41.541 [2024-11-20 13:40:40.792486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73471 ]
00:20:41.541 [2024-11-20 13:40:40.951163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:41.798 [2024-11-20 13:40:41.050613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:41.799 [2024-11-20 13:40:41.050706] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
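bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed config and passes only if json_config rejects it with the *ERROR* lines seen above. The actual nonenclosed.json and nonarray.json files are not shown in this log; hypothetical contents consistent with the two errors:

    # "not enclosed in {}": the top level is not a JSON object
    echo '[ { "subsystem": "bdev", "config": [] } ]' > nonenclosed.json
    # "'subsystems' should be an array": an object where an array is required
    echo '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > nonarray.json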
00:20:41.799 [2024-11-20 13:40:41.050723] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:41.799 [2024-11-20 13:40:41.050732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:42.056 00:20:42.056 real 0m0.546s 00:20:42.056 user 0m0.352s 00:20:42.056 sys 0m0.090s 00:20:42.056 13:40:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.056 13:40:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:42.056 ************************************ 00:20:42.056 END TEST bdev_json_nonarray 00:20:42.056 ************************************ 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:42.056 13:40:41 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:42.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:50.153 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:50.153 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:50.153 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.433 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.433 ************************************ 00:21:53.433 END TEST blockdev_xnvme 00:21:53.433 ************************************ 00:21:53.433 00:21:53.433 real 1m59.130s 00:21:53.433 user 1m25.896s 00:21:53.433 sys 1m49.890s 00:21:53.433 13:41:52 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.433 13:41:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.433 13:41:52 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:53.433 13:41:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.433 13:41:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.433 13:41:52 -- common/autotest_common.sh@10 -- # set +x 00:21:53.433 ************************************ 00:21:53.433 START TEST ublk 00:21:53.433 ************************************ 00:21:53.433 13:41:52 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:53.433 * Looking for test storage... 
00:21:53.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:53.433 13:41:52 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.433 13:41:52 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.433 13:41:52 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.433 13:41:52 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.433 13:41:52 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.433 13:41:52 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.433 13:41:52 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.433 13:41:52 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.433 13:41:52 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.433 13:41:52 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:53.433 13:41:52 ublk -- scripts/common.sh@345 -- # : 1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.433 13:41:52 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.433 13:41:52 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@353 -- # local d=1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.433 13:41:52 ublk -- scripts/common.sh@355 -- # echo 1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.433 13:41:52 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@353 -- # local d=2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.433 13:41:52 ublk -- scripts/common.sh@355 -- # echo 2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.433 13:41:52 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.433 13:41:52 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.433 13:41:52 ublk -- scripts/common.sh@368 -- # return 0 00:21:53.433 13:41:52 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.434 --rc genhtml_branch_coverage=1 00:21:53.434 --rc genhtml_function_coverage=1 00:21:53.434 --rc genhtml_legend=1 00:21:53.434 --rc geninfo_all_blocks=1 00:21:53.434 --rc geninfo_unexecuted_blocks=1 00:21:53.434 00:21:53.434 ' 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.434 --rc genhtml_branch_coverage=1 00:21:53.434 --rc genhtml_function_coverage=1 00:21:53.434 --rc genhtml_legend=1 00:21:53.434 --rc geninfo_all_blocks=1 00:21:53.434 --rc geninfo_unexecuted_blocks=1 00:21:53.434 00:21:53.434 ' 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.434 --rc genhtml_branch_coverage=1 00:21:53.434 --rc 
genhtml_function_coverage=1 00:21:53.434 --rc genhtml_legend=1 00:21:53.434 --rc geninfo_all_blocks=1 00:21:53.434 --rc geninfo_unexecuted_blocks=1 00:21:53.434 00:21:53.434 ' 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.434 --rc genhtml_branch_coverage=1 00:21:53.434 --rc genhtml_function_coverage=1 00:21:53.434 --rc genhtml_legend=1 00:21:53.434 --rc geninfo_all_blocks=1 00:21:53.434 --rc geninfo_unexecuted_blocks=1 00:21:53.434 00:21:53.434 ' 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:53.434 13:41:52 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:53.434 13:41:52 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:53.434 13:41:52 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:53.434 13:41:52 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:53.434 13:41:52 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:53.434 13:41:52 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:53.434 13:41:52 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:53.434 13:41:52 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:53.434 13:41:52 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.434 13:41:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:53.434 ************************************ 00:21:53.434 START TEST test_save_ublk_config 00:21:53.434 ************************************ 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73792 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73792 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73792 ']' 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:53.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.434 13:41:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:53.434 [2024-11-20 13:41:52.600543] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:21:53.434 [2024-11-20 13:41:52.600814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73792 ] 00:21:53.434 [2024-11-20 13:41:52.757455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.692 [2024-11-20 13:41:52.891314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:54.257 [2024-11-20 13:41:53.532996] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:54.257 [2024-11-20 13:41:53.534019] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:54.257 malloc0 00:21:54.257 [2024-11-20 13:41:53.597375] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:54.257 [2024-11-20 13:41:53.597458] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:54.257 [2024-11-20 13:41:53.597467] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:54.257 [2024-11-20 13:41:53.597474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:54.257 [2024-11-20 13:41:53.604204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:54.257 [2024-11-20 13:41:53.604236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:54.257 [2024-11-20 13:41:53.613010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:54.257 [2024-11-20 13:41:53.613124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:54.257 [2024-11-20 13:41:53.634859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:54.257 0 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.257 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:54.515 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.515 13:41:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:54.515 "subsystems": [ 00:21:54.515 { 00:21:54.515 "subsystem": "fsdev", 00:21:54.515 
"config": [ 00:21:54.515 { 00:21:54.515 "method": "fsdev_set_opts", 00:21:54.515 "params": { 00:21:54.515 "fsdev_io_pool_size": 65535, 00:21:54.515 "fsdev_io_cache_size": 256 00:21:54.515 } 00:21:54.515 } 00:21:54.515 ] 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "subsystem": "keyring", 00:21:54.515 "config": [] 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "subsystem": "iobuf", 00:21:54.515 "config": [ 00:21:54.515 { 00:21:54.515 "method": "iobuf_set_options", 00:21:54.515 "params": { 00:21:54.515 "small_pool_count": 8192, 00:21:54.515 "large_pool_count": 1024, 00:21:54.515 "small_bufsize": 8192, 00:21:54.515 "large_bufsize": 135168, 00:21:54.515 "enable_numa": false 00:21:54.515 } 00:21:54.515 } 00:21:54.515 ] 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "subsystem": "sock", 00:21:54.515 "config": [ 00:21:54.515 { 00:21:54.515 "method": "sock_set_default_impl", 00:21:54.515 "params": { 00:21:54.515 "impl_name": "posix" 00:21:54.515 } 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "method": "sock_impl_set_options", 00:21:54.515 "params": { 00:21:54.515 "impl_name": "ssl", 00:21:54.515 "recv_buf_size": 4096, 00:21:54.515 "send_buf_size": 4096, 00:21:54.515 "enable_recv_pipe": true, 00:21:54.515 "enable_quickack": false, 00:21:54.515 "enable_placement_id": 0, 00:21:54.515 "enable_zerocopy_send_server": true, 00:21:54.515 "enable_zerocopy_send_client": false, 00:21:54.515 "zerocopy_threshold": 0, 00:21:54.515 "tls_version": 0, 00:21:54.515 "enable_ktls": false 00:21:54.515 } 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "method": "sock_impl_set_options", 00:21:54.515 "params": { 00:21:54.515 "impl_name": "posix", 00:21:54.515 "recv_buf_size": 2097152, 00:21:54.515 "send_buf_size": 2097152, 00:21:54.515 "enable_recv_pipe": true, 00:21:54.515 "enable_quickack": false, 00:21:54.515 "enable_placement_id": 0, 00:21:54.515 "enable_zerocopy_send_server": true, 00:21:54.515 "enable_zerocopy_send_client": false, 00:21:54.515 "zerocopy_threshold": 0, 00:21:54.515 "tls_version": 0, 00:21:54.515 "enable_ktls": false 00:21:54.515 } 00:21:54.515 } 00:21:54.515 ] 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "subsystem": "vmd", 00:21:54.515 "config": [] 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "subsystem": "accel", 00:21:54.515 "config": [ 00:21:54.515 { 00:21:54.515 "method": "accel_set_options", 00:21:54.515 "params": { 00:21:54.515 "small_cache_size": 128, 00:21:54.515 "large_cache_size": 16, 00:21:54.515 "task_count": 2048, 00:21:54.515 "sequence_count": 2048, 00:21:54.515 "buf_count": 2048 00:21:54.515 } 00:21:54.515 } 00:21:54.515 ] 00:21:54.515 }, 00:21:54.515 { 00:21:54.515 "subsystem": "bdev", 00:21:54.515 "config": [ 00:21:54.515 { 00:21:54.515 "method": "bdev_set_options", 00:21:54.515 "params": { 00:21:54.515 "bdev_io_pool_size": 65535, 00:21:54.515 "bdev_io_cache_size": 256, 00:21:54.516 "bdev_auto_examine": true, 00:21:54.516 "iobuf_small_cache_size": 128, 00:21:54.516 "iobuf_large_cache_size": 16 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "bdev_raid_set_options", 00:21:54.516 "params": { 00:21:54.516 "process_window_size_kb": 1024, 00:21:54.516 "process_max_bandwidth_mb_sec": 0 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "bdev_iscsi_set_options", 00:21:54.516 "params": { 00:21:54.516 "timeout_sec": 30 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "bdev_nvme_set_options", 00:21:54.516 "params": { 00:21:54.516 "action_on_timeout": "none", 00:21:54.516 "timeout_us": 0, 00:21:54.516 "timeout_admin_us": 0, 00:21:54.516 
"keep_alive_timeout_ms": 10000, 00:21:54.516 "arbitration_burst": 0, 00:21:54.516 "low_priority_weight": 0, 00:21:54.516 "medium_priority_weight": 0, 00:21:54.516 "high_priority_weight": 0, 00:21:54.516 "nvme_adminq_poll_period_us": 10000, 00:21:54.516 "nvme_ioq_poll_period_us": 0, 00:21:54.516 "io_queue_requests": 0, 00:21:54.516 "delay_cmd_submit": true, 00:21:54.516 "transport_retry_count": 4, 00:21:54.516 "bdev_retry_count": 3, 00:21:54.516 "transport_ack_timeout": 0, 00:21:54.516 "ctrlr_loss_timeout_sec": 0, 00:21:54.516 "reconnect_delay_sec": 0, 00:21:54.516 "fast_io_fail_timeout_sec": 0, 00:21:54.516 "disable_auto_failback": false, 00:21:54.516 "generate_uuids": false, 00:21:54.516 "transport_tos": 0, 00:21:54.516 "nvme_error_stat": false, 00:21:54.516 "rdma_srq_size": 0, 00:21:54.516 "io_path_stat": false, 00:21:54.516 "allow_accel_sequence": false, 00:21:54.516 "rdma_max_cq_size": 0, 00:21:54.516 "rdma_cm_event_timeout_ms": 0, 00:21:54.516 "dhchap_digests": [ 00:21:54.516 "sha256", 00:21:54.516 "sha384", 00:21:54.516 "sha512" 00:21:54.516 ], 00:21:54.516 "dhchap_dhgroups": [ 00:21:54.516 "null", 00:21:54.516 "ffdhe2048", 00:21:54.516 "ffdhe3072", 00:21:54.516 "ffdhe4096", 00:21:54.516 "ffdhe6144", 00:21:54.516 "ffdhe8192" 00:21:54.516 ] 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "bdev_nvme_set_hotplug", 00:21:54.516 "params": { 00:21:54.516 "period_us": 100000, 00:21:54.516 "enable": false 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "bdev_malloc_create", 00:21:54.516 "params": { 00:21:54.516 "name": "malloc0", 00:21:54.516 "num_blocks": 8192, 00:21:54.516 "block_size": 4096, 00:21:54.516 "physical_block_size": 4096, 00:21:54.516 "uuid": "df48d7f0-c60a-4223-9a82-11173f2ade99", 00:21:54.516 "optimal_io_boundary": 0, 00:21:54.516 "md_size": 0, 00:21:54.516 "dif_type": 0, 00:21:54.516 "dif_is_head_of_md": false, 00:21:54.516 "dif_pi_format": 0 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "bdev_wait_for_examine" 00:21:54.516 } 00:21:54.516 ] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "scsi", 00:21:54.516 "config": null 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "scheduler", 00:21:54.516 "config": [ 00:21:54.516 { 00:21:54.516 "method": "framework_set_scheduler", 00:21:54.516 "params": { 00:21:54.516 "name": "static" 00:21:54.516 } 00:21:54.516 } 00:21:54.516 ] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "vhost_scsi", 00:21:54.516 "config": [] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "vhost_blk", 00:21:54.516 "config": [] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "ublk", 00:21:54.516 "config": [ 00:21:54.516 { 00:21:54.516 "method": "ublk_create_target", 00:21:54.516 "params": { 00:21:54.516 "cpumask": "1" 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "ublk_start_disk", 00:21:54.516 "params": { 00:21:54.516 "bdev_name": "malloc0", 00:21:54.516 "ublk_id": 0, 00:21:54.516 "num_queues": 1, 00:21:54.516 "queue_depth": 128 00:21:54.516 } 00:21:54.516 } 00:21:54.516 ] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "nbd", 00:21:54.516 "config": [] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "nvmf", 00:21:54.516 "config": [ 00:21:54.516 { 00:21:54.516 "method": "nvmf_set_config", 00:21:54.516 "params": { 00:21:54.516 "discovery_filter": "match_any", 00:21:54.516 "admin_cmd_passthru": { 00:21:54.516 "identify_ctrlr": false 00:21:54.516 }, 00:21:54.516 "dhchap_digests": [ 00:21:54.516 "sha256", 00:21:54.516 
"sha384", 00:21:54.516 "sha512" 00:21:54.516 ], 00:21:54.516 "dhchap_dhgroups": [ 00:21:54.516 "null", 00:21:54.516 "ffdhe2048", 00:21:54.516 "ffdhe3072", 00:21:54.516 "ffdhe4096", 00:21:54.516 "ffdhe6144", 00:21:54.516 "ffdhe8192" 00:21:54.516 ] 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "nvmf_set_max_subsystems", 00:21:54.516 "params": { 00:21:54.516 "max_subsystems": 1024 00:21:54.516 } 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "method": "nvmf_set_crdt", 00:21:54.516 "params": { 00:21:54.516 "crdt1": 0, 00:21:54.516 "crdt2": 0, 00:21:54.516 "crdt3": 0 00:21:54.516 } 00:21:54.516 } 00:21:54.516 ] 00:21:54.516 }, 00:21:54.516 { 00:21:54.516 "subsystem": "iscsi", 00:21:54.516 "config": [ 00:21:54.516 { 00:21:54.516 "method": "iscsi_set_options", 00:21:54.516 "params": { 00:21:54.516 "node_base": "iqn.2016-06.io.spdk", 00:21:54.516 "max_sessions": 128, 00:21:54.516 "max_connections_per_session": 2, 00:21:54.516 "max_queue_depth": 64, 00:21:54.516 "default_time2wait": 2, 00:21:54.516 "default_time2retain": 20, 00:21:54.516 "first_burst_length": 8192, 00:21:54.516 "immediate_data": true, 00:21:54.516 "allow_duplicated_isid": false, 00:21:54.516 "error_recovery_level": 0, 00:21:54.516 "nop_timeout": 60, 00:21:54.516 "nop_in_interval": 30, 00:21:54.516 "disable_chap": false, 00:21:54.516 "require_chap": false, 00:21:54.516 "mutual_chap": false, 00:21:54.516 "chap_group": 0, 00:21:54.516 "max_large_datain_per_connection": 64, 00:21:54.516 "max_r2t_per_connection": 4, 00:21:54.516 "pdu_pool_size": 36864, 00:21:54.516 "immediate_data_pool_size": 16384, 00:21:54.516 "data_out_pool_size": 2048 00:21:54.516 } 00:21:54.516 } 00:21:54.516 ] 00:21:54.516 } 00:21:54.516 ] 00:21:54.516 }' 00:21:54.516 13:41:53 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73792 00:21:54.516 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73792 ']' 00:21:54.516 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73792 00:21:54.516 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:54.516 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.516 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73792 00:21:54.774 killing process with pid 73792 00:21:54.774 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.774 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.774 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73792' 00:21:54.774 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73792 00:21:54.774 13:41:53 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73792 00:21:55.706 [2024-11-20 13:41:54.990602] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:55.706 [2024-11-20 13:41:55.027051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:55.706 [2024-11-20 13:41:55.027207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:55.706 [2024-11-20 13:41:55.030410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:55.706 [2024-11-20 13:41:55.030485] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:21:55.706 [2024-11-20 13:41:55.030497] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:55.706 [2024-11-20 13:41:55.030523] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:55.706 [2024-11-20 13:41:55.030661] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:57.119 13:41:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:57.119 13:41:56 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73848 00:21:57.119 13:41:56 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73848 00:21:57.119 13:41:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73848 ']' 00:21:57.119 13:41:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:57.119 "subsystems": [ 00:21:57.119 { 00:21:57.119 "subsystem": "fsdev", 00:21:57.119 "config": [ 00:21:57.119 { 00:21:57.119 "method": "fsdev_set_opts", 00:21:57.119 "params": { 00:21:57.119 "fsdev_io_pool_size": 65535, 00:21:57.119 "fsdev_io_cache_size": 256 00:21:57.119 } 00:21:57.119 } 00:21:57.119 ] 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "subsystem": "keyring", 00:21:57.119 "config": [] 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "subsystem": "iobuf", 00:21:57.119 "config": [ 00:21:57.119 { 00:21:57.119 "method": "iobuf_set_options", 00:21:57.119 "params": { 00:21:57.119 "small_pool_count": 8192, 00:21:57.119 "large_pool_count": 1024, 00:21:57.119 "small_bufsize": 8192, 00:21:57.119 "large_bufsize": 135168, 00:21:57.119 "enable_numa": false 00:21:57.119 } 00:21:57.119 } 00:21:57.119 ] 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "subsystem": "sock", 00:21:57.119 "config": [ 00:21:57.119 { 00:21:57.119 "method": "sock_set_default_impl", 00:21:57.119 "params": { 00:21:57.119 "impl_name": "posix" 00:21:57.119 } 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "method": "sock_impl_set_options", 00:21:57.119 "params": { 00:21:57.119 "impl_name": "ssl", 00:21:57.119 "recv_buf_size": 4096, 00:21:57.119 "send_buf_size": 4096, 00:21:57.119 "enable_recv_pipe": true, 00:21:57.119 "enable_quickack": false, 00:21:57.119 "enable_placement_id": 0, 00:21:57.119 "enable_zerocopy_send_server": true, 00:21:57.119 "enable_zerocopy_send_client": false, 00:21:57.119 "zerocopy_threshold": 0, 00:21:57.119 "tls_version": 0, 00:21:57.119 "enable_ktls": false 00:21:57.119 } 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "method": "sock_impl_set_options", 00:21:57.119 "params": { 00:21:57.119 "impl_name": "posix", 00:21:57.119 "recv_buf_size": 2097152, 00:21:57.119 "send_buf_size": 2097152, 00:21:57.119 "enable_recv_pipe": true, 00:21:57.119 "enable_quickack": false, 00:21:57.119 "enable_placement_id": 0, 00:21:57.119 "enable_zerocopy_send_server": true, 00:21:57.119 "enable_zerocopy_send_client": false, 00:21:57.119 "zerocopy_threshold": 0, 00:21:57.119 "tls_version": 0, 00:21:57.119 "enable_ktls": false 00:21:57.119 } 00:21:57.119 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "vmd", 00:21:57.120 "config": [] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "accel", 00:21:57.120 "config": [ 00:21:57.120 { 00:21:57.120 "method": "accel_set_options", 00:21:57.120 "params": { 00:21:57.120 "small_cache_size": 128, 00:21:57.120 "large_cache_size": 16, 00:21:57.120 "task_count": 2048, 00:21:57.120 "sequence_count": 2048, 00:21:57.120 "buf_count": 2048 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "bdev", 00:21:57.120 "config": [ 00:21:57.120 { 00:21:57.120 
"method": "bdev_set_options", 00:21:57.120 "params": { 00:21:57.120 "bdev_io_pool_size": 65535, 00:21:57.120 "bdev_io_cache_size": 256, 00:21:57.120 "bdev_auto_examine": true, 00:21:57.120 "iobuf_small_cache_size": 128, 00:21:57.120 "iobuf_large_cache_size": 16 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_raid_set_options", 00:21:57.120 "params": { 00:21:57.120 "process_window_size_kb": 1024, 00:21:57.120 "process_max_bandwidth_mb_sec": 0 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_iscsi_set_options", 00:21:57.120 "params": { 00:21:57.120 "timeout_sec": 30 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_nvme_set_options", 00:21:57.120 "params": { 00:21:57.120 "action_on_timeout": "none", 00:21:57.120 "timeout_us": 0, 00:21:57.120 "timeout_admin_us": 0, 00:21:57.120 "keep_alive_timeout_ms": 10000, 00:21:57.120 "arbitration_burst": 0, 00:21:57.120 "low_priority_weight": 0, 00:21:57.120 "medium_priority_weight": 0, 00:21:57.120 "high_priority_weight": 0, 00:21:57.120 "nvme_adminq_poll_period_us": 10000, 00:21:57.120 "nvme_ioq_poll_period_us": 0, 00:21:57.120 "io_queue_requests": 0, 00:21:57.120 "delay_cmd_submit": true, 00:21:57.120 "transport_retry_count": 4, 00:21:57.120 "bdev_retry_count": 3, 00:21:57.120 "transport_ack_timeout": 0, 00:21:57.120 "ctrlr_loss_timeout_sec": 0, 00:21:57.120 "reconnect_delay_sec": 0, 00:21:57.120 "fast_io_fail_timeout_sec": 0, 00:21:57.120 "disable_auto_failback": false, 00:21:57.120 "generate_uuids": false, 00:21:57.120 "transport_tos": 0, 00:21:57.120 "nvme_error_stat": false, 00:21:57.120 "rdma_srq_size": 0, 00:21:57.120 "io_path_stat": false, 00:21:57.120 "allow_accel_sequence": false, 00:21:57.120 "rdma_max_cq_size": 0, 00:21:57.120 "rdma_cm_event_timeout_ms": 0, 00:21:57.120 "dhchap_digests": [ 00:21:57.120 "sha256", 00:21:57.120 "sha384", 00:21:57.120 "sha512" 00:21:57.120 ], 00:21:57.120 "dhchap_dhgroups": [ 00:21:57.120 "null", 00:21:57.120 "ffdhe2048", 00:21:57.120 "ffdhe3072", 00:21:57.120 "ffdhe4096", 00:21:57.120 "ffdhe6144", 00:21:57.120 "ffdhe8192" 00:21:57.120 ] 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_nvme_set_hotplug", 00:21:57.120 "params": { 00:21:57.120 "period_us": 100000, 00:21:57.120 "enable": false 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_malloc_create", 00:21:57.120 "params": { 00:21:57.120 "name": "malloc0", 00:21:57.120 "num_blocks": 8192, 00:21:57.120 "block_size": 4096, 00:21:57.120 "physical_block_size": 4096, 00:21:57.120 "uuid": "df48d7f0-c60a-4223-9a82-11173f2ade99", 00:21:57.120 "optimal_io_boundary": 0, 00:21:57.120 "md_size": 0, 00:21:57.120 "dif_type": 0, 00:21:57.120 "dif_is_head_of_md": false, 00:21:57.120 "dif_pi_format": 0 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_wait_for_examine" 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "scsi", 00:21:57.120 "config": null 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "scheduler", 00:21:57.120 "config": [ 00:21:57.120 { 00:21:57.120 "method": "framework_set_scheduler", 00:21:57.120 "params": { 00:21:57.120 "name": "static" 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "vhost_scsi", 00:21:57.120 "config": [] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "vhost_blk", 00:21:57.120 "config": [] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "ublk", 00:21:57.120 "config": [ 
00:21:57.120 { 00:21:57.120 "method": "ublk_create_target", 00:21:57.120 "params": { 00:21:57.120 "cpumask": "1" 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "ublk_start_disk", 00:21:57.120 "params": { 00:21:57.120 "bdev_name": "malloc0", 00:21:57.120 "ublk_id": 0, 00:21:57.120 "num_queues": 1, 00:21:57.120 "queue_depth": 128 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "nbd", 00:21:57.120 "config": [] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "nvmf", 00:21:57.120 "config": [ 00:21:57.120 { 00:21:57.120 "method": "nvmf_set_config", 00:21:57.120 "params": { 00:21:57.120 "discovery_filter": "match_any", 00:21:57.120 "admin_cmd_passthru": { 00:21:57.120 "identify_ctrlr": false 00:21:57.120 }, 00:21:57.120 "dhchap_digests": [ 00:21:57.120 "sha256", 00:21:57.120 "sha384", 00:21:57.120 "sha512" 00:21:57.120 ], 00:21:57.120 "dhchap_dhgroups": [ 00:21:57.120 "null", 00:21:57.120 "ffdhe2048", 00:21:57.120 "ffdhe3072", 00:21:57.120 "ffdhe4096", 00:21:57.120 "ffdhe6144", 00:21:57.120 "ffdhe8192" 00:21:57.120 ] 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "nvmf_set_max_subsystems", 00:21:57.120 "params": { 00:21:57.120 "max_subsystems": 1024 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "nvmf_set_crdt", 00:21:57.120 "params": { 00:21:57.120 "crdt1": 0, 00:21:57.120 "crdt2": 0, 00:21:57.120 "crdt3": 0 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "iscsi", 00:21:57.120 "config": [ 00:21:57.120 { 00:21:57.120 "method": "iscsi_set_options", 00:21:57.120 "params": { 00:21:57.120 "node_base": "iqn.2016-06.io.spdk", 00:21:57.120 "max_sessions": 128, 00:21:57.120 "max_connections_per_session": 2, 00:21:57.120 "max_queue_depth": 64, 00:21:57.120 "default_time2wait": 2, 00:21:57.120 "default_time2retain": 20, 00:21:57.120 "first_burst_length": 8192, 00:21:57.120 "immediate_data": true, 00:21:57.120 "allow_duplicated_isid": false, 00:21:57.120 "error_recovery_level": 0, 00:21:57.120 "nop_timeout": 60, 00:21:57.120 "nop_in_interval": 30, 00:21:57.120 "disable_chap": false, 00:21:57.120 "require_chap": false, 00:21:57.120 "mutual_chap": false, 00:21:57.120 "chap_group": 0, 00:21:57.120 "max_large_datain_per_connection": 64, 00:21:57.120 "max_r2t_per_connection": 4, 00:21:57.120 "pdu_pool_size": 36864, 00:21:57.120 "immediate_data_pool_size": 16384, 00:21:57.120 "data_out_pool_size": 2048 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }' 00:21:57.120 13:41:56 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.120 13:41:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.121 13:41:56 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.121 13:41:56 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.121 13:41:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:57.378 [2024-11-20 13:41:56.561984] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:21:57.378 [2024-11-20 13:41:56.562141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73848 ] 00:21:57.378 [2024-11-20 13:41:56.734218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.635 [2024-11-20 13:41:56.840247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.199 [2024-11-20 13:41:57.605988] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:58.199 [2024-11-20 13:41:57.606805] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:58.199 [2024-11-20 13:41:57.614110] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:58.199 [2024-11-20 13:41:57.614188] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:58.199 [2024-11-20 13:41:57.614198] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:58.199 [2024-11-20 13:41:57.614205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:58.199 [2024-11-20 13:41:57.622115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:58.199 [2024-11-20 13:41:57.622139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:58.456 [2024-11-20 13:41:57.629997] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:58.456 [2024-11-20 13:41:57.630098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:58.456 [2024-11-20 13:41:57.646993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73848 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73848 ']' 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73848 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73848 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.456 killing process with pid 73848 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73848' 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73848 00:21:58.456 13:41:57 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73848 00:21:59.831 [2024-11-20 13:41:58.921929] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:59.831 [2024-11-20 13:41:58.952067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:59.831 [2024-11-20 13:41:58.952193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:59.831 [2024-11-20 13:41:58.958999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:59.831 [2024-11-20 13:41:58.959048] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:59.831 [2024-11-20 13:41:58.959056] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:59.831 [2024-11-20 13:41:58.959080] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:59.831 [2024-11-20 13:41:58.959216] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:01.221 13:42:00 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:22:01.221 00:22:01.221 real 0m7.818s 00:22:01.221 user 0m5.702s 00:22:01.221 sys 0m2.824s 00:22:01.221 ************************************ 00:22:01.221 END TEST test_save_ublk_config 00:22:01.221 ************************************ 00:22:01.221 13:42:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.221 13:42:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:01.221 13:42:00 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:01.221 13:42:00 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73926 00:22:01.221 13:42:00 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.221 13:42:00 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73926 00:22:01.221 13:42:00 ublk -- common/autotest_common.sh@835 -- # '[' -z 73926 ']' 00:22:01.221 13:42:00 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.221 13:42:00 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.221 13:42:00 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.221 13:42:00 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.221 13:42:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:01.221 [2024-11-20 13:42:00.444101] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
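Once waitforlisten sees /var/tmp/spdk.sock, test_create_ublk drives the freshly started target purely over RPC; the rpc_cmd traces that follow correspond to these script-level calls (a sketch using scripts/rpc.py directly):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                     # create the kernel-facing ublk target
    $rpc bdev_malloc_create 128 4096            # 128 MiB malloc bdev, 4096-byte blocks -> "Malloc0"
    $rpc ublk_start_disk Malloc0 0 -q 4 -d 512  # expose Malloc0 as /dev/ublkb0: 4 queues, depth 512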
00:22:01.221 [2024-11-20 13:42:00.444212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73926 ] 00:22:01.221 [2024-11-20 13:42:00.597873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:01.479 [2024-11-20 13:42:00.703405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.479 [2024-11-20 13:42:00.703713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.044 13:42:01 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.044 13:42:01 ublk -- common/autotest_common.sh@868 -- # return 0 00:22:02.044 13:42:01 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:22:02.044 13:42:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:02.044 13:42:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.044 13:42:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.044 ************************************ 00:22:02.044 START TEST test_create_ublk 00:22:02.044 ************************************ 00:22:02.044 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:22:02.044 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:22:02.044 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.044 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.044 [2024-11-20 13:42:01.333995] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:02.044 [2024-11-20 13:42:01.335917] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:02.044 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.044 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:22:02.044 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:22:02.044 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.044 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.303 [2024-11-20 13:42:01.548152] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:02.303 [2024-11-20 13:42:01.548546] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:02.303 [2024-11-20 13:42:01.548564] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:02.303 [2024-11-20 13:42:01.548572] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:02.303 [2024-11-20 13:42:01.556016] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:02.303 [2024-11-20 13:42:01.556045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:02.303 
[2024-11-20 13:42:01.564009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:02.303 [2024-11-20 13:42:01.564654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:02.303 [2024-11-20 13:42:01.595020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.303 13:42:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:22:02.303 { 00:22:02.303 "ublk_device": "/dev/ublkb0", 00:22:02.303 "id": 0, 00:22:02.303 "queue_depth": 512, 00:22:02.303 "num_queues": 4, 00:22:02.303 "bdev_name": "Malloc0" 00:22:02.303 } 00:22:02.303 ]' 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:22:02.303 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:22:02.561 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:22:02.561 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:22:02.561 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:02.562 13:42:01 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
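run_fio_test expands the template assembled above into the single fio command that follows. Stripped of the harness variables, the equivalent standalone check is (a sketch; 134217728 bytes covers the whole 128 MiB device, and because the run is --time_based the verify read phase never executes, as fio itself notes below):

    # Write the 0xcc pattern over /dev/ublkb0 for 10 seconds with direct I/O.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0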
00:22:02.562 13:42:01 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:22:02.562 fio: verification read phase will never start because write phase uses all of runtime 00:22:02.562 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:22:02.562 fio-3.35 00:22:02.562 Starting 1 process 00:22:14.753 00:22:14.753 fio_test: (groupid=0, jobs=1): err= 0: pid=73965: Wed Nov 20 13:42:12 2024 00:22:14.753 write: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(608MiB/10001msec); 0 zone resets 00:22:14.753 clat (usec): min=40, max=9755, avg=63.40, stdev=128.03 00:22:14.753 lat (usec): min=40, max=9770, avg=63.89, stdev=128.05 00:22:14.753 clat percentiles (usec): 00:22:14.753 | 1.00th=[ 44], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 52], 00:22:14.753 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 59], 00:22:14.753 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 69], 95.00th=[ 75], 00:22:14.753 | 99.00th=[ 91], 99.50th=[ 104], 99.90th=[ 2835], 99.95th=[ 3392], 00:22:14.753 | 99.99th=[ 3752] 00:22:14.753 bw ( KiB/s): min=24678, max=73808, per=100.00%, avg=62218.42, stdev=10196.58, samples=19 00:22:14.753 iops : min= 6169, max=18452, avg=15554.58, stdev=2549.25, samples=19 00:22:14.753 lat (usec) : 50=12.45%, 100=86.97%, 250=0.30%, 500=0.06%, 750=0.01% 00:22:14.753 lat (usec) : 1000=0.02% 00:22:14.753 lat (msec) : 2=0.05%, 4=0.14%, 10=0.01% 00:22:14.753 cpu : usr=2.62%, sys=14.57%, ctx=155561, majf=0, minf=796 00:22:14.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.753 issued rwts: total=0,155557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:14.753 00:22:14.753 Run status group 0 (all jobs): 00:22:14.753 WRITE: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=608MiB (637MB), run=10001-10001msec 00:22:14.753 00:22:14.753 Disk stats (read/write): 00:22:14.753 ublkb0: ios=0/153860, merge=0/0, ticks=0/8158, in_queue=8159, util=99.07% 00:22:14.753 13:42:12 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 [2024-11-20 13:42:12.035401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:14.753 [2024-11-20 13:42:12.075001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:14.753 [2024-11-20 13:42:12.075662] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:14.753 [2024-11-20 13:42:12.083101] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:14.753 [2024-11-20 13:42:12.083447] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:14.753 [2024-11-20 13:42:12.083544] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 [2024-11-20 13:42:12.099063] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:14.753 request: 00:22:14.753 { 00:22:14.753 "ublk_id": 0, 00:22:14.753 "method": "ublk_stop_disk", 00:22:14.753 "req_id": 1 00:22:14.753 } 00:22:14.753 Got JSON-RPC error response 00:22:14.753 response: 00:22:14.753 { 00:22:14.753 "code": -19, 00:22:14.753 "message": "No such device" 00:22:14.753 } 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.753 13:42:12 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 [2024-11-20 13:42:12.115088] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:14.753 [2024-11-20 13:42:12.118835] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:14.753 [2024-11-20 13:42:12.118876] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:14.753 13:42:12 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:14.753 ************************************ 00:22:14.753 END TEST test_create_ublk 00:22:14.753 ************************************ 00:22:14.753 13:42:12 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:14.753 00:22:14.753 real 0m11.270s 00:22:14.753 user 0m0.589s 00:22:14.753 sys 0m1.538s 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 13:42:12 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:14.753 13:42:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:14.753 13:42:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.753 13:42:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 ************************************ 00:22:14.753 START TEST test_create_multi_ublk 00:22:14.753 ************************************ 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.753 [2024-11-20 13:42:12.645986] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:14.753 [2024-11-20 13:42:12.647554] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.753 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 [2024-11-20 13:42:12.874122] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:22:14.754 [2024-11-20 13:42:12.874457] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:14.754 [2024-11-20 13:42:12.874470] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:14.754 [2024-11-20 13:42:12.874479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:14.754 [2024-11-20 13:42:12.886220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:14.754 [2024-11-20 13:42:12.886245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:14.754 [2024-11-20 13:42:12.897995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:14.754 [2024-11-20 13:42:12.898527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:14.754 [2024-11-20 13:42:12.937994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 [2024-11-20 13:42:13.154094] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:14.754 [2024-11-20 13:42:13.154399] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:14.754 [2024-11-20 13:42:13.154412] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:14.754 [2024-11-20 13:42:13.154418] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:14.754 [2024-11-20 13:42:13.162010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:14.754 [2024-11-20 13:42:13.162027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:14.754 [2024-11-20 13:42:13.169994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:14.754 [2024-11-20 13:42:13.170511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:14.754 [2024-11-20 13:42:13.182002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.754 
13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 [2024-11-20 13:42:13.349124] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:14.754 [2024-11-20 13:42:13.349448] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:14.754 [2024-11-20 13:42:13.349461] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:14.754 [2024-11-20 13:42:13.349468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:14.754 [2024-11-20 13:42:13.357025] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:14.754 [2024-11-20 13:42:13.357052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:14.754 [2024-11-20 13:42:13.365003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:14.754 [2024-11-20 13:42:13.365542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:14.754 [2024-11-20 13:42:13.374039] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 [2024-11-20 13:42:13.549195] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:14.754 [2024-11-20 13:42:13.549627] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:14.754 [2024-11-20 13:42:13.549656] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:14.754 [2024-11-20 13:42:13.549667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:14.754 
[2024-11-20 13:42:13.557287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:14.754 [2024-11-20 13:42:13.557315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:14.754 [2024-11-20 13:42:13.565021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:14.754 [2024-11-20 13:42:13.565733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:14.754 [2024-11-20 13:42:13.574052] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:14.754 { 00:22:14.754 "ublk_device": "/dev/ublkb0", 00:22:14.754 "id": 0, 00:22:14.754 "queue_depth": 512, 00:22:14.754 "num_queues": 4, 00:22:14.754 "bdev_name": "Malloc0" 00:22:14.754 }, 00:22:14.754 { 00:22:14.754 "ublk_device": "/dev/ublkb1", 00:22:14.754 "id": 1, 00:22:14.754 "queue_depth": 512, 00:22:14.754 "num_queues": 4, 00:22:14.754 "bdev_name": "Malloc1" 00:22:14.754 }, 00:22:14.754 { 00:22:14.754 "ublk_device": "/dev/ublkb2", 00:22:14.754 "id": 2, 00:22:14.754 "queue_depth": 512, 00:22:14.754 "num_queues": 4, 00:22:14.754 "bdev_name": "Malloc2" 00:22:14.754 }, 00:22:14.754 { 00:22:14.754 "ublk_device": "/dev/ublkb3", 00:22:14.754 "id": 3, 00:22:14.754 "queue_depth": 512, 00:22:14.754 "num_queues": 4, 00:22:14.754 "bdev_name": "Malloc3" 00:22:14.754 } 00:22:14.754 ]' 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:14.754 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:14.755 13:42:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:14.755 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:15.012 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:15.012 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:15.012 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:15.012 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.013 [2024-11-20 13:42:14.197123] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:15.013 [2024-11-20 13:42:14.230482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:15.013 [2024-11-20 13:42:14.231489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:15.013 [2024-11-20 13:42:14.237006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:15.013 [2024-11-20 13:42:14.237259] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:15.013 [2024-11-20 13:42:14.237276] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.013 [2024-11-20 13:42:14.253073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:15.013 [2024-11-20 13:42:14.286487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:15.013 [2024-11-20 13:42:14.287491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:15.013 [2024-11-20 13:42:14.293102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:15.013 [2024-11-20 13:42:14.293348] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:15.013 [2024-11-20 13:42:14.293362] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.013 [2024-11-20 13:42:14.308119] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:15.013 [2024-11-20 13:42:14.349374] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:15.013 [2024-11-20 13:42:14.350407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:15.013 [2024-11-20 13:42:14.357010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:15.013 [2024-11-20 13:42:14.357262] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:15.013 [2024-11-20 13:42:14.357278] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.013 [2024-11-20 13:42:14.373108] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:15.013 [2024-11-20 13:42:14.421050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:15.013 [2024-11-20 13:42:14.421674] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:15.013 [2024-11-20 13:42:14.429114] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:15.013 [2024-11-20 13:42:14.429346] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:15.013 [2024-11-20 13:42:14.429360] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.013 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:15.270 [2024-11-20 13:42:14.653084] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:15.270 [2024-11-20 13:42:14.656774] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:15.270 [2024-11-20 13:42:14.656807] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:15.270 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:15.270 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.270 13:42:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:15.270 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.270 13:42:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.835 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.835 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:15.835 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.835 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.143 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.143 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:16.143 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:16.143 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.143 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:16.404 13:42:15 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:16.404 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:16.661 ************************************ 00:22:16.661 END TEST test_create_multi_ublk 00:22:16.661 ************************************ 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:16.661 00:22:16.661 real 0m3.255s 00:22:16.661 user 0m0.834s 00:22:16.661 sys 0m0.132s 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.661 13:42:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.661 13:42:15 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:16.661 13:42:15 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:16.661 13:42:15 ublk -- ublk/ublk.sh@130 -- # killprocess 73926 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@954 -- # '[' -z 73926 ']' 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@958 -- # kill -0 73926 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@959 -- # uname 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73926 00:22:16.661 killing process with pid 73926 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.661 13:42:15 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73926' 00:22:16.662 13:42:15 ublk -- common/autotest_common.sh@973 -- # kill 73926 00:22:16.662 13:42:15 ublk -- common/autotest_common.sh@978 -- # wait 73926 00:22:17.226 [2024-11-20 13:42:16.502477] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:17.226 [2024-11-20 13:42:16.502532] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:17.790 00:22:17.790 real 0m24.833s 00:22:17.790 user 0m35.800s 00:22:17.790 sys 0m9.374s 00:22:17.790 13:42:17 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.790 13:42:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:17.790 ************************************ 00:22:17.790 END TEST ublk 00:22:17.790 ************************************ 00:22:18.048 13:42:17 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:18.048 
13:42:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:18.048 13:42:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.048 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:22:18.048 ************************************ 00:22:18.048 START TEST ublk_recovery 00:22:18.048 ************************************ 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:18.048 * Looking for test storage... 00:22:18.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.048 13:42:17 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:18.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.048 --rc genhtml_branch_coverage=1 00:22:18.048 --rc genhtml_function_coverage=1 00:22:18.048 --rc genhtml_legend=1 00:22:18.048 --rc geninfo_all_blocks=1 00:22:18.048 --rc geninfo_unexecuted_blocks=1 00:22:18.048 00:22:18.048 ' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:18.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.048 --rc genhtml_branch_coverage=1 00:22:18.048 --rc genhtml_function_coverage=1 00:22:18.048 --rc genhtml_legend=1 00:22:18.048 --rc geninfo_all_blocks=1 00:22:18.048 --rc geninfo_unexecuted_blocks=1 00:22:18.048 00:22:18.048 ' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:18.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.048 --rc genhtml_branch_coverage=1 00:22:18.048 --rc genhtml_function_coverage=1 00:22:18.048 --rc genhtml_legend=1 00:22:18.048 --rc geninfo_all_blocks=1 00:22:18.048 --rc geninfo_unexecuted_blocks=1 00:22:18.048 00:22:18.048 ' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:18.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.048 --rc genhtml_branch_coverage=1 00:22:18.048 --rc genhtml_function_coverage=1 00:22:18.048 --rc genhtml_legend=1 00:22:18.048 --rc geninfo_all_blocks=1 00:22:18.048 --rc geninfo_unexecuted_blocks=1 00:22:18.048 00:22:18.048 ' 00:22:18.048 13:42:17 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:18.048 13:42:17 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:18.048 13:42:17 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:18.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.048 13:42:17 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74310 00:22:18.048 13:42:17 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.048 13:42:17 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:18.048 13:42:17 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74310 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74310 ']' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.048 13:42:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.048 [2024-11-20 13:42:17.431199] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:22:18.048 [2024-11-20 13:42:17.431324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74310 ] 00:22:18.306 [2024-11-20 13:42:17.599786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:18.306 [2024-11-20 13:42:17.686057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.306 [2024-11-20 13:42:17.686076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:18.870 13:42:18 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.870 [2024-11-20 13:42:18.275995] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:18.870 [2024-11-20 13:42:18.277674] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.870 13:42:18 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.870 13:42:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.127 malloc0 00:22:19.127 13:42:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.127 13:42:18 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:19.127 13:42:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.127 13:42:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.127 [2024-11-20 13:42:18.379156] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:19.127 [2024-11-20 13:42:18.379285] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:19.127 [2024-11-20 13:42:18.379301] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:19.127 [2024-11-20 13:42:18.379312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:19.127 [2024-11-20 13:42:18.387022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:19.127 [2024-11-20 13:42:18.387056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:19.127 [2024-11-20 13:42:18.395003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:19.127 [2024-11-20 13:42:18.395128] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:19.127 [2024-11-20 13:42:18.405061] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:19.127 1 00:22:19.127 13:42:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.127 13:42:18 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:20.059 13:42:19 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74345 00:22:20.059 13:42:19 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:20.059 13:42:19 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:20.316 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:20.316 fio-3.35 00:22:20.316 Starting 1 process 00:22:25.579 13:42:24 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74310 00:22:25.579 13:42:24 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:30.931 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74310 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:30.931 13:42:29 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:30.931 13:42:29 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74456 00:22:30.931 13:42:29 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.931 13:42:29 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74456 00:22:30.931 13:42:29 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74456 ']' 00:22:30.931 13:42:29 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.931 13:42:29 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.931 13:42:29 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.931 13:42:29 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.931 13:42:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.931 [2024-11-20 13:42:29.507134] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
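The point of this restart is that the kernel-side ublk device created by the first spdk_tgt instance (pid 74310, killed with SIGKILL while fio was mid-run) still exists; the new instance (pid 74456) must reattach to it rather than create it anew. A minimal sketch of the RPC sequence the script drives, with names and arguments taken from the surrounding trace:

  # First target instance: create and expose the device, then run fio against /dev/ublkb1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
  # ... spdk_tgt is killed with kill -9 while I/O is in flight ...
  # Second instance: rebuild the target and bdev, then reattach instead of restarting the disk
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1

The recovery path shows up below as UBLK_CMD_GET_DEV_INFO followed by UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY, after which fio completes its full 60-second run against the same /dev/ublkb1.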
00:22:30.931 [2024-11-20 13:42:29.507258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74456 ] 00:22:30.931 [2024-11-20 13:42:29.666768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:30.931 [2024-11-20 13:42:29.809056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.931 [2024-11-20 13:42:29.809057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:31.189 13:42:30 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.189 [2024-11-20 13:42:30.405996] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:31.189 [2024-11-20 13:42:30.407871] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.189 13:42:30 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.189 malloc0 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.189 13:42:30 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.189 [2024-11-20 13:42:30.510132] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:31.189 [2024-11-20 13:42:30.510171] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:31.189 [2024-11-20 13:42:30.510181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:31.189 [2024-11-20 13:42:30.518029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:31.189 [2024-11-20 13:42:30.518055] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:22:31.189 [2024-11-20 13:42:30.518063] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:31.189 [2024-11-20 13:42:30.518143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:31.189 1 00:22:31.189 13:42:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.189 13:42:30 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74345 00:22:31.189 [2024-11-20 13:42:30.525994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:31.189 [2024-11-20 13:42:30.532986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:31.189 [2024-11-20 13:42:30.541173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:31.189 [2024-11-20 
13:42:30.541196] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:27.445 00:23:27.445 fio_test: (groupid=0, jobs=1): err= 0: pid=74348: Wed Nov 20 13:43:19 2024 00:23:27.445 read: IOPS=25.0k, BW=97.8MiB/s (103MB/s)(5865MiB/60002msec) 00:23:27.445 slat (nsec): min=872, max=539746, avg=5187.63, stdev=2140.03 00:23:27.445 clat (usec): min=758, max=6137.7k, avg=2516.37, stdev=40055.86 00:23:27.445 lat (usec): min=762, max=6137.7k, avg=2521.55, stdev=40055.85 00:23:27.445 clat percentiles (usec): 00:23:27.445 | 1.00th=[ 1713], 5.00th=[ 1860], 10.00th=[ 1909], 20.00th=[ 1942], 00:23:27.445 | 30.00th=[ 1975], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2057], 00:23:27.445 | 70.00th=[ 2147], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 3523], 00:23:27.445 | 99.00th=[ 5866], 99.50th=[ 6390], 99.90th=[ 8160], 99.95th=[ 9241], 00:23:27.445 | 99.99th=[13435] 00:23:27.445 bw ( KiB/s): min= 6776, max=124592, per=100.00%, avg=110183.81, stdev=19578.87, samples=108 00:23:27.445 iops : min= 1694, max=31148, avg=27545.94, stdev=4894.74, samples=108 00:23:27.445 write: IOPS=25.0k, BW=97.6MiB/s (102MB/s)(5858MiB/60002msec); 0 zone resets 00:23:27.445 slat (nsec): min=930, max=668673, avg=5241.75, stdev=2222.77 00:23:27.445 clat (usec): min=593, max=6137.9k, avg=2590.35, stdev=40082.73 00:23:27.445 lat (usec): min=597, max=6137.9k, avg=2595.59, stdev=40082.72 00:23:27.445 clat percentiles (usec): 00:23:27.445 | 1.00th=[ 1745], 5.00th=[ 1942], 10.00th=[ 1991], 20.00th=[ 2040], 00:23:27.445 | 30.00th=[ 2057], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2147], 00:23:27.445 | 70.00th=[ 2245], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 3458], 00:23:27.445 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 8356], 99.95th=[ 9372], 00:23:27.445 | 99.99th=[13566] 00:23:27.445 bw ( KiB/s): min= 6576, max=126576, per=100.00%, avg=110052.53, stdev=19592.09, samples=108 00:23:27.445 iops : min= 1644, max=31644, avg=27513.12, stdev=4898.03, samples=108 00:23:27.445 lat (usec) : 750=0.01%, 1000=0.01% 00:23:27.445 lat (msec) : 2=25.27%, 4=70.97%, 10=3.72%, 20=0.04%, >=2000=0.01% 00:23:27.445 cpu : usr=6.17%, sys=26.45%, ctx=105950, majf=0, minf=13 00:23:27.445 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:27.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:27.445 issued rwts: total=1501564,1499563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:27.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:27.445 00:23:27.445 Run status group 0 (all jobs): 00:23:27.445 READ: bw=97.8MiB/s (103MB/s), 97.8MiB/s-97.8MiB/s (103MB/s-103MB/s), io=5865MiB (6150MB), run=60002-60002msec 00:23:27.445 WRITE: bw=97.6MiB/s (102MB/s), 97.6MiB/s-97.6MiB/s (102MB/s-102MB/s), io=5858MiB (6142MB), run=60002-60002msec 00:23:27.445 00:23:27.445 Disk stats (read/write): 00:23:27.445 ublkb1: ios=1498229/1496291, merge=0/0, ticks=3685169/3669151, in_queue=7354320, util=99.90% 00:23:27.445 13:43:19 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.445 [2024-11-20 13:43:19.670452] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:27.445 [2024-11-20 13:43:19.710029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:23:27.445 [2024-11-20 13:43:19.710184] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:27.445 [2024-11-20 13:43:19.721020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:27.445 [2024-11-20 13:43:19.721139] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:27.445 [2024-11-20 13:43:19.721149] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.445 13:43:19 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.445 [2024-11-20 13:43:19.729084] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:27.445 [2024-11-20 13:43:19.732830] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:27.445 [2024-11-20 13:43:19.732864] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.445 13:43:19 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:27.445 13:43:19 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:27.445 13:43:19 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74456 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74456 ']' 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74456 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74456 00:23:27.445 killing process with pid 74456 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74456' 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74456 00:23:27.445 13:43:19 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74456 00:23:27.445 [2024-11-20 13:43:20.929286] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:27.445 [2024-11-20 13:43:20.929334] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:27.445 ************************************ 00:23:27.445 END TEST ublk_recovery 00:23:27.445 ************************************ 00:23:27.445 00:23:27.445 real 1m5.003s 00:23:27.445 user 1m45.965s 00:23:27.445 sys 0m32.824s 00:23:27.445 13:43:22 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.445 13:43:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.445 13:43:22 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:27.445 13:43:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:27.445 13:43:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.445 13:43:22 -- common/autotest_common.sh@10 -- # set +x 00:23:27.445 13:43:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:27.445 13:43:22 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:27.445 13:43:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:27.445 13:43:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.445 13:43:22 -- common/autotest_common.sh@10 -- # set +x 00:23:27.446 ************************************ 00:23:27.446 START TEST ftl 00:23:27.446 ************************************ 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:27.446 * Looking for test storage... 00:23:27.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.446 13:43:22 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.446 13:43:22 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.446 13:43:22 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.446 13:43:22 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.446 13:43:22 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.446 13:43:22 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:27.446 13:43:22 ftl -- scripts/common.sh@345 -- # : 1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.446 13:43:22 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.446 13:43:22 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@353 -- # local d=1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.446 13:43:22 ftl -- scripts/common.sh@355 -- # echo 1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.446 13:43:22 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@353 -- # local d=2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.446 13:43:22 ftl -- scripts/common.sh@355 -- # echo 2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.446 13:43:22 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.446 13:43:22 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.446 13:43:22 ftl -- scripts/common.sh@368 -- # return 0 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.446 --rc genhtml_branch_coverage=1 00:23:27.446 --rc genhtml_function_coverage=1 00:23:27.446 --rc genhtml_legend=1 00:23:27.446 --rc geninfo_all_blocks=1 00:23:27.446 --rc geninfo_unexecuted_blocks=1 00:23:27.446 00:23:27.446 ' 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.446 --rc genhtml_branch_coverage=1 00:23:27.446 --rc genhtml_function_coverage=1 00:23:27.446 --rc genhtml_legend=1 00:23:27.446 --rc geninfo_all_blocks=1 00:23:27.446 --rc geninfo_unexecuted_blocks=1 00:23:27.446 00:23:27.446 ' 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.446 --rc genhtml_branch_coverage=1 00:23:27.446 --rc genhtml_function_coverage=1 00:23:27.446 --rc genhtml_legend=1 00:23:27.446 --rc geninfo_all_blocks=1 00:23:27.446 --rc geninfo_unexecuted_blocks=1 00:23:27.446 00:23:27.446 ' 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.446 --rc genhtml_branch_coverage=1 00:23:27.446 --rc genhtml_function_coverage=1 00:23:27.446 --rc genhtml_legend=1 00:23:27.446 --rc geninfo_all_blocks=1 00:23:27.446 --rc geninfo_unexecuted_blocks=1 00:23:27.446 00:23:27.446 ' 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:27.446 13:43:22 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:27.446 13:43:22 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:27.446 13:43:22 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:27.446 13:43:22 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
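The scripts/common.sh trace just above is the lcov version gate: `lt 1.15 2` splits both version strings on `.`, `-` or `:` and compares them field by field, so 1.15 sorts below 2 and the branch/function coverage flags stay enabled. A minimal bash sketch of that comparison, reconstructed from the traced variable names (ver1/ver2, ver1_l/ver2_l) rather than copied verbatim from scripts/common.sh:

  # Compare two dotted versions field by field; missing fields count as 0.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local ver1 ver2 ver1_l ver2_l op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]] && return 0 || return 1; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]] && return 0 || return 1; }
      done
      # All fields equal: strict comparisons fail here.
      [[ $op == '<' || $op == '>' ]] && return 1 || return 0
  }

For `lt 1.15 2` the first field already decides (1 < 2), so the loop returns 0 on iteration v=0.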
00:23:27.446 13:43:22 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:27.446 13:43:22 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:27.446 13:43:22 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:27.446 13:43:22 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:27.446 13:43:22 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.446 13:43:22 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.446 13:43:22 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:27.446 13:43:22 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:27.446 13:43:22 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:27.446 13:43:22 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:27.446 13:43:22 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:27.446 13:43:22 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:27.446 13:43:22 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.446 13:43:22 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.446 13:43:22 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:27.446 13:43:22 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:27.446 13:43:22 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:27.446 13:43:22 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:27.446 13:43:22 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:27.446 13:43:22 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:27.446 13:43:22 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:27.446 13:43:22 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:27.446 13:43:22 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:27.446 13:43:22 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:27.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:27.446 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:27.446 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:27.446 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:27.446 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75262 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75262 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@835 -- # '[' -z 75262 ']' 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.446 13:43:22 ftl -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.446 13:43:22 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.446 13:43:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:27.446 [2024-11-20 13:43:22.958034] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:23:27.446 [2024-11-20 13:43:22.958417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75262 ] 00:23:27.446 [2024-11-20 13:43:23.111103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.446 [2024-11-20 13:43:23.211982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.446 13:43:23 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.446 13:43:23 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:27.446 13:43:23 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:27.446 13:43:24 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:27.446 13:43:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:27.446 13:43:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@50 -- # break 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:27.446 13:43:25 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:27.447 13:43:25 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:27.447 13:43:25 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:27.447 13:43:25 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:27.447 13:43:25 ftl -- ftl/ftl.sh@63 -- # break 00:23:27.447 13:43:25 ftl -- ftl/ftl.sh@66 -- # killprocess 75262 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@954 -- # '[' -z 75262 ']' 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@958 -- # kill -0 75262 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@959 -- # uname 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.447 13:43:25 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75262 00:23:27.447 killing process with pid 75262 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75262' 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@973 -- # kill 75262 00:23:27.447 13:43:25 ftl -- common/autotest_common.sh@978 -- # wait 75262 00:23:27.447 13:43:26 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:27.447 13:43:26 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:27.447 13:43:26 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:27.447 13:43:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.447 13:43:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:27.447 ************************************ 00:23:27.447 START TEST ftl_fio_basic 00:23:27.447 ************************************ 00:23:27.447 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:27.705 * Looking for test storage... 00:23:27.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.705 13:43:26 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.706 --rc genhtml_branch_coverage=1 00:23:27.706 --rc genhtml_function_coverage=1 00:23:27.706 --rc genhtml_legend=1 00:23:27.706 --rc geninfo_all_blocks=1 00:23:27.706 --rc geninfo_unexecuted_blocks=1 00:23:27.706 00:23:27.706 ' 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.706 --rc genhtml_branch_coverage=1 00:23:27.706 --rc genhtml_function_coverage=1 00:23:27.706 --rc genhtml_legend=1 00:23:27.706 --rc geninfo_all_blocks=1 00:23:27.706 --rc geninfo_unexecuted_blocks=1 00:23:27.706 00:23:27.706 ' 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.706 --rc genhtml_branch_coverage=1 00:23:27.706 --rc genhtml_function_coverage=1 00:23:27.706 --rc genhtml_legend=1 00:23:27.706 --rc geninfo_all_blocks=1 00:23:27.706 --rc geninfo_unexecuted_blocks=1 00:23:27.706 00:23:27.706 ' 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.706 --rc genhtml_branch_coverage=1 00:23:27.706 --rc genhtml_function_coverage=1 00:23:27.706 --rc genhtml_legend=1 00:23:27.706 --rc geninfo_all_blocks=1 00:23:27.706 --rc geninfo_unexecuted_blocks=1 00:23:27.706 00:23:27.706 ' 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:27.706 13:43:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
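Each suite in this log runs under the run_test wrapper from autotest_common.sh, which produces the asterisk banners (START TEST / END TEST) and the real/user/sys timing block that closed ublk_recovery above. A hedged sketch of that wrapper pattern, inferred from the banners and the argument-count check ('[' 2 -le 1 ']') visible in the trace; the real helper interleaves the timing output and xtrace handling differently:

  # Banner-and-timing wrapper around one test script (sketch, not verbatim).
  run_test() {
      local test_name=$1; shift
      (($# >= 1)) || return 1          # need a command after the test name
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"                        # e.g. test/ftl/fio.sh <dev> <cache> basic
      local rc=$?
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      return $rc
  }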
00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75394 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75394 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75394 ']' 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.706 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:27.706 [2024-11-20 13:43:27.092234] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
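waitforlisten 75394 above parks fio.sh until the just-forked spdk_tgt opens its RPC socket, giving up after max_retries=100 probes or as soon as the process dies. A minimal sketch of that polling loop, assuming that any cheap RPC (rpc_get_methods here) is an adequate liveness probe:

  # Poll the UNIX-domain RPC socket until the target answers or retries run out.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while ((max_retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
              &>/dev/null && return 0              # socket is up and serving
          sleep 0.5
      done
      return 1
  }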
00:23:27.706 [2024-11-20 13:43:27.092499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75394 ] 00:23:27.964 [2024-11-20 13:43:27.246556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:27.964 [2024-11-20 13:43:27.353540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.964 [2024-11-20 13:43:27.353645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.964 [2024-11-20 13:43:27.353898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:28.898 13:43:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:28.898 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:29.156 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.156 { 00:23:29.156 "name": "nvme0n1", 00:23:29.156 "aliases": [ 00:23:29.156 "ed40df3c-d45f-47cc-8905-8f57059cb4c7" 00:23:29.156 ], 00:23:29.156 "product_name": "NVMe disk", 00:23:29.156 "block_size": 4096, 00:23:29.156 "num_blocks": 1310720, 00:23:29.157 "uuid": "ed40df3c-d45f-47cc-8905-8f57059cb4c7", 00:23:29.157 "numa_id": -1, 00:23:29.157 "assigned_rate_limits": { 00:23:29.157 "rw_ios_per_sec": 0, 00:23:29.157 "rw_mbytes_per_sec": 0, 00:23:29.157 "r_mbytes_per_sec": 0, 00:23:29.157 "w_mbytes_per_sec": 0 00:23:29.157 }, 00:23:29.157 "claimed": false, 00:23:29.157 "zoned": false, 00:23:29.157 "supported_io_types": { 00:23:29.157 "read": true, 00:23:29.157 "write": true, 00:23:29.157 "unmap": true, 00:23:29.157 "flush": true, 00:23:29.157 "reset": true, 00:23:29.157 "nvme_admin": true, 00:23:29.157 "nvme_io": true, 00:23:29.157 "nvme_io_md": false, 00:23:29.157 "write_zeroes": true, 00:23:29.157 "zcopy": false, 00:23:29.157 "get_zone_info": false, 00:23:29.157 "zone_management": false, 00:23:29.157 "zone_append": false, 00:23:29.157 "compare": true, 00:23:29.157 "compare_and_write": false, 00:23:29.157 "abort": true, 00:23:29.157 
"seek_hole": false, 00:23:29.157 "seek_data": false, 00:23:29.157 "copy": true, 00:23:29.157 "nvme_iov_md": false 00:23:29.157 }, 00:23:29.157 "driver_specific": { 00:23:29.157 "nvme": [ 00:23:29.157 { 00:23:29.157 "pci_address": "0000:00:11.0", 00:23:29.157 "trid": { 00:23:29.157 "trtype": "PCIe", 00:23:29.157 "traddr": "0000:00:11.0" 00:23:29.157 }, 00:23:29.157 "ctrlr_data": { 00:23:29.157 "cntlid": 0, 00:23:29.157 "vendor_id": "0x1b36", 00:23:29.157 "model_number": "QEMU NVMe Ctrl", 00:23:29.157 "serial_number": "12341", 00:23:29.157 "firmware_revision": "8.0.0", 00:23:29.157 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:29.157 "oacs": { 00:23:29.157 "security": 0, 00:23:29.157 "format": 1, 00:23:29.157 "firmware": 0, 00:23:29.157 "ns_manage": 1 00:23:29.157 }, 00:23:29.157 "multi_ctrlr": false, 00:23:29.157 "ana_reporting": false 00:23:29.157 }, 00:23:29.157 "vs": { 00:23:29.157 "nvme_version": "1.4" 00:23:29.157 }, 00:23:29.157 "ns_data": { 00:23:29.157 "id": 1, 00:23:29.157 "can_share": false 00:23:29.157 } 00:23:29.157 } 00:23:29.157 ], 00:23:29.157 "mp_policy": "active_passive" 00:23:29.157 } 00:23:29.157 } 00:23:29.157 ]' 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:29.157 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:29.415 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:29.415 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:29.674 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5da476b7-d821-4100-9979-d5f8ea6b70e7 00:23:29.674 13:43:28 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5da476b7-d821-4100-9979-d5f8ea6b70e7 00:23:29.946 13:43:29 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:29.946 13:43:29 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:29.946 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:29.946 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:29.946 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:29.946 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=11ceac4c-13a6-4d9e-9b45-6833393d3da1 
00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.947 { 00:23:29.947 "name": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:29.947 "aliases": [ 00:23:29.947 "lvs/nvme0n1p0" 00:23:29.947 ], 00:23:29.947 "product_name": "Logical Volume", 00:23:29.947 "block_size": 4096, 00:23:29.947 "num_blocks": 26476544, 00:23:29.947 "uuid": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:29.947 "assigned_rate_limits": { 00:23:29.947 "rw_ios_per_sec": 0, 00:23:29.947 "rw_mbytes_per_sec": 0, 00:23:29.947 "r_mbytes_per_sec": 0, 00:23:29.947 "w_mbytes_per_sec": 0 00:23:29.947 }, 00:23:29.947 "claimed": false, 00:23:29.947 "zoned": false, 00:23:29.947 "supported_io_types": { 00:23:29.947 "read": true, 00:23:29.947 "write": true, 00:23:29.947 "unmap": true, 00:23:29.947 "flush": false, 00:23:29.947 "reset": true, 00:23:29.947 "nvme_admin": false, 00:23:29.947 "nvme_io": false, 00:23:29.947 "nvme_io_md": false, 00:23:29.947 "write_zeroes": true, 00:23:29.947 "zcopy": false, 00:23:29.947 "get_zone_info": false, 00:23:29.947 "zone_management": false, 00:23:29.947 "zone_append": false, 00:23:29.947 "compare": false, 00:23:29.947 "compare_and_write": false, 00:23:29.947 "abort": false, 00:23:29.947 "seek_hole": true, 00:23:29.947 "seek_data": true, 00:23:29.947 "copy": false, 00:23:29.947 "nvme_iov_md": false 00:23:29.947 }, 00:23:29.947 "driver_specific": { 00:23:29.947 "lvol": { 00:23:29.947 "lvol_store_uuid": "5da476b7-d821-4100-9979-d5f8ea6b70e7", 00:23:29.947 "base_bdev": "nvme0n1", 00:23:29.947 "thin_provision": true, 00:23:29.947 "num_allocated_clusters": 0, 00:23:29.947 "snapshot": false, 00:23:29.947 "clone": false, 00:23:29.947 "esnap_clone": false 00:23:29.947 } 00:23:29.947 } 00:23:29.947 } 00:23:29.947 ]' 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:29.947 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:30.211 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:30.211 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:30.211 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:30.211 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:30.211 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:30.211 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:30.469 13:43:29 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:30.469 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:30.469 { 00:23:30.469 "name": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:30.469 "aliases": [ 00:23:30.469 "lvs/nvme0n1p0" 00:23:30.469 ], 00:23:30.469 "product_name": "Logical Volume", 00:23:30.469 "block_size": 4096, 00:23:30.469 "num_blocks": 26476544, 00:23:30.469 "uuid": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:30.469 "assigned_rate_limits": { 00:23:30.469 "rw_ios_per_sec": 0, 00:23:30.469 "rw_mbytes_per_sec": 0, 00:23:30.469 "r_mbytes_per_sec": 0, 00:23:30.469 "w_mbytes_per_sec": 0 00:23:30.469 }, 00:23:30.469 "claimed": false, 00:23:30.469 "zoned": false, 00:23:30.469 "supported_io_types": { 00:23:30.469 "read": true, 00:23:30.469 "write": true, 00:23:30.469 "unmap": true, 00:23:30.469 "flush": false, 00:23:30.469 "reset": true, 00:23:30.469 "nvme_admin": false, 00:23:30.469 "nvme_io": false, 00:23:30.469 "nvme_io_md": false, 00:23:30.469 "write_zeroes": true, 00:23:30.469 "zcopy": false, 00:23:30.469 "get_zone_info": false, 00:23:30.469 "zone_management": false, 00:23:30.469 "zone_append": false, 00:23:30.469 "compare": false, 00:23:30.469 "compare_and_write": false, 00:23:30.469 "abort": false, 00:23:30.469 "seek_hole": true, 00:23:30.469 "seek_data": true, 00:23:30.469 "copy": false, 00:23:30.469 "nvme_iov_md": false 00:23:30.469 }, 00:23:30.469 "driver_specific": { 00:23:30.469 "lvol": { 00:23:30.469 "lvol_store_uuid": "5da476b7-d821-4100-9979-d5f8ea6b70e7", 00:23:30.469 "base_bdev": "nvme0n1", 00:23:30.469 "thin_provision": true, 00:23:30.469 "num_allocated_clusters": 0, 00:23:30.470 "snapshot": false, 00:23:30.470 "clone": false, 00:23:30.470 "esnap_clone": false 00:23:30.470 } 00:23:30.470 } 00:23:30.470 } 00:23:30.470 ]' 00:23:30.470 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:30.470 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:30.470 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:30.728 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:30.728 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:30.728 13:43:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:30.728 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:30.728 13:43:29 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:30.728 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:30.728 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11ceac4c-13a6-4d9e-9b45-6833393d3da1 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:30.987 { 00:23:30.987 "name": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:30.987 "aliases": [ 00:23:30.987 "lvs/nvme0n1p0" 00:23:30.987 ], 00:23:30.987 "product_name": "Logical Volume", 00:23:30.987 "block_size": 4096, 00:23:30.987 "num_blocks": 26476544, 00:23:30.987 "uuid": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:30.987 "assigned_rate_limits": { 00:23:30.987 "rw_ios_per_sec": 0, 00:23:30.987 "rw_mbytes_per_sec": 0, 00:23:30.987 "r_mbytes_per_sec": 0, 00:23:30.987 "w_mbytes_per_sec": 0 00:23:30.987 }, 00:23:30.987 "claimed": false, 00:23:30.987 "zoned": false, 00:23:30.987 "supported_io_types": { 00:23:30.987 "read": true, 00:23:30.987 "write": true, 00:23:30.987 "unmap": true, 00:23:30.987 "flush": false, 00:23:30.987 "reset": true, 00:23:30.987 "nvme_admin": false, 00:23:30.987 "nvme_io": false, 00:23:30.987 "nvme_io_md": false, 00:23:30.987 "write_zeroes": true, 00:23:30.987 "zcopy": false, 00:23:30.987 "get_zone_info": false, 00:23:30.987 "zone_management": false, 00:23:30.987 "zone_append": false, 00:23:30.987 "compare": false, 00:23:30.987 "compare_and_write": false, 00:23:30.987 "abort": false, 00:23:30.987 "seek_hole": true, 00:23:30.987 "seek_data": true, 00:23:30.987 "copy": false, 00:23:30.987 "nvme_iov_md": false 00:23:30.987 }, 00:23:30.987 "driver_specific": { 00:23:30.987 "lvol": { 00:23:30.987 "lvol_store_uuid": "5da476b7-d821-4100-9979-d5f8ea6b70e7", 00:23:30.987 "base_bdev": "nvme0n1", 00:23:30.987 "thin_provision": true, 00:23:30.987 "num_allocated_clusters": 0, 00:23:30.987 "snapshot": false, 00:23:30.987 "clone": false, 00:23:30.987 "esnap_clone": false 00:23:30.987 } 00:23:30.987 } 00:23:30.987 } 00:23:30.987 ]' 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:30.987 13:43:30 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 11ceac4c-13a6-4d9e-9b45-6833393d3da1 -c nvc0n1p0 --l2p_dram_limit 60 00:23:31.248 [2024-11-20 13:43:30.600581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.248 [2024-11-20 13:43:30.600633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:31.248 [2024-11-20 13:43:30.600647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:31.248 
[2024-11-20 13:43:30.600654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.248 [2024-11-20 13:43:30.600721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.248 [2024-11-20 13:43:30.600732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.248 [2024-11-20 13:43:30.600740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:31.248 [2024-11-20 13:43:30.600746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.248 [2024-11-20 13:43:30.600769] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:31.248 [2024-11-20 13:43:30.601421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:31.248 [2024-11-20 13:43:30.601441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.601448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.249 [2024-11-20 13:43:30.601457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:23:31.249 [2024-11-20 13:43:30.601463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.601564] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 46db9bda-b709-447a-9ba4-fb9abf515a7f 00:23:31.249 [2024-11-20 13:43:30.602574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.602606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:31.249 [2024-11-20 13:43:30.602616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:31.249 [2024-11-20 13:43:30.602623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.607422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.607573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.249 [2024-11-20 13:43:30.607587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:23:31.249 [2024-11-20 13:43:30.607595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.607687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.607696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.249 [2024-11-20 13:43:30.607704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:31.249 [2024-11-20 13:43:30.607714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.607765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.607774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:31.249 [2024-11-20 13:43:30.607781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:31.249 [2024-11-20 13:43:30.607788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.607814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:31.249 [2024-11-20 13:43:30.610778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 
13:43:30.610880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.249 [2024-11-20 13:43:30.610897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:23:31.249 [2024-11-20 13:43:30.610905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.610939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.610945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:31.249 [2024-11-20 13:43:30.610954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:31.249 [2024-11-20 13:43:30.610960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.611004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:31.249 [2024-11-20 13:43:30.611127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:31.249 [2024-11-20 13:43:30.611139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:31.249 [2024-11-20 13:43:30.611148] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:31.249 [2024-11-20 13:43:30.611157] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611165] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:31.249 [2024-11-20 13:43:30.611178] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:31.249 [2024-11-20 13:43:30.611186] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:31.249 [2024-11-20 13:43:30.611192] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:31.249 [2024-11-20 13:43:30.611199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.611206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:31.249 [2024-11-20 13:43:30.611215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:23:31.249 [2024-11-20 13:43:30.611220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.611296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.249 [2024-11-20 13:43:30.611303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:31.249 [2024-11-20 13:43:30.611310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:31.249 [2024-11-20 13:43:30.611316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.249 [2024-11-20 13:43:30.611408] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:31.249 [2024-11-20 13:43:30.611415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:31.249 [2024-11-20 13:43:30.611424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611438] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:31.249 [2024-11-20 13:43:30.611443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:31.249 [2024-11-20 13:43:30.611462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.249 [2024-11-20 13:43:30.611474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:31.249 [2024-11-20 13:43:30.611479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:31.249 [2024-11-20 13:43:30.611486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.249 [2024-11-20 13:43:30.611492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:31.249 [2024-11-20 13:43:30.611498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:31.249 [2024-11-20 13:43:30.611504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:31.249 [2024-11-20 13:43:30.611518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:31.249 [2024-11-20 13:43:30.611538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:31.249 [2024-11-20 13:43:30.611556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:31.249 [2024-11-20 13:43:30.611574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:31.249 [2024-11-20 13:43:30.611591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:31.249 [2024-11-20 13:43:30.611611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.249 [2024-11-20 13:43:30.611622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:31.249 [2024-11-20 13:43:30.611638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:31.249 [2024-11-20 13:43:30.611644] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.249 [2024-11-20 13:43:30.611649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:31.249 [2024-11-20 13:43:30.611656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:31.249 [2024-11-20 13:43:30.611661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:31.249 [2024-11-20 13:43:30.611672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:31.249 [2024-11-20 13:43:30.611680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611685] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:31.249 [2024-11-20 13:43:30.611692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:31.249 [2024-11-20 13:43:30.611698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.249 [2024-11-20 13:43:30.611712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:31.249 [2024-11-20 13:43:30.611720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:31.249 [2024-11-20 13:43:30.611726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:31.249 [2024-11-20 13:43:30.611733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:31.249 [2024-11-20 13:43:30.611738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:31.249 [2024-11-20 13:43:30.611745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:31.249 [2024-11-20 13:43:30.611753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:31.249 [2024-11-20 13:43:30.611762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:31.250 [2024-11-20 13:43:30.611776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:31.250 [2024-11-20 13:43:30.611786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:31.250 [2024-11-20 13:43:30.611793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:31.250 [2024-11-20 13:43:30.611799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:31.250 [2024-11-20 13:43:30.611806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:31.250 [2024-11-20 13:43:30.611811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:31.250 [2024-11-20 13:43:30.611819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:31.250 [2024-11-20 13:43:30.611824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:31.250 [2024-11-20 13:43:30.611833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:31.250 [2024-11-20 13:43:30.611864] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:31.250 [2024-11-20 13:43:30.611872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:31.250 [2024-11-20 13:43:30.611886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:31.250 [2024-11-20 13:43:30.611892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:31.250 [2024-11-20 13:43:30.611900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:31.250 [2024-11-20 13:43:30.611905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.250 [2024-11-20 13:43:30.611912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:31.250 [2024-11-20 13:43:30.611918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:23:31.250 [2024-11-20 13:43:30.611925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.250 [2024-11-20 13:43:30.611997] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
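The layout dump above closes out bdev_ftl_create's setup of FTL instance ftl0 (superblock UUID 46db9bda-b709-447a-9ba4-fb9abf515a7f, 103424 MiB base device, 5171 MiB NV cache, 20971520 L2P entries under the 60 MiB DRAM limit), and the NV cache scrub it announces is what stalls the log for the next few seconds. Condensed from the RPC calls traced earlier, the full stack this run assembled, with this run's lvstore UUID and lvol name filled in:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe: 1310720 x 4 KiB blocks
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 5da476b7-d821-4100-9979-d5f8ea6b70e7
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # carve nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 11ceac4c-13a6-4d9e-9b45-6833393d3da1 -c nvc0n1p0 --l2p_dram_limit 60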
00:23:31.250 [2024-11-20 13:43:30.612009] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:35.434 [2024-11-20 13:43:34.540345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.434 [2024-11-20 13:43:34.540551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:35.434 [2024-11-20 13:43:34.540624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3928.332 ms 00:23:35.434 [2024-11-20 13:43:34.540653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.434 [2024-11-20 13:43:34.565751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.434 [2024-11-20 13:43:34.565927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.434 [2024-11-20 13:43:34.566048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.879 ms 00:23:35.434 [2024-11-20 13:43:34.566077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.434 [2024-11-20 13:43:34.566235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.434 [2024-11-20 13:43:34.566270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:35.434 [2024-11-20 13:43:34.566332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:35.434 [2024-11-20 13:43:34.566390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.434 [2024-11-20 13:43:34.612097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.434 [2024-11-20 13:43:34.612328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.434 [2024-11-20 13:43:34.612397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.636 ms 00:23:35.435 [2024-11-20 13:43:34.612425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.612524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.612552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.435 [2024-11-20 13:43:34.612573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:35.435 [2024-11-20 13:43:34.612628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.613043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.613091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.435 [2024-11-20 13:43:34.613163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:23:35.435 [2024-11-20 13:43:34.613188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.613331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.613440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.435 [2024-11-20 13:43:34.613491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:23:35.435 [2024-11-20 13:43:34.613516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.627724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.627852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.435 [2024-11-20 
13:43:34.627933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.168 ms 00:23:35.435 [2024-11-20 13:43:34.627961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.639297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:35.435 [2024-11-20 13:43:34.653707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.653868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:35.435 [2024-11-20 13:43:34.653924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.597 ms 00:23:35.435 [2024-11-20 13:43:34.653947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.713183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.713375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:35.435 [2024-11-20 13:43:34.713446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.167 ms 00:23:35.435 [2024-11-20 13:43:34.713470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.713670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.713706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:35.435 [2024-11-20 13:43:34.713782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:23:35.435 [2024-11-20 13:43:34.713805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.736731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.736899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:35.435 [2024-11-20 13:43:34.737049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.857 ms 00:23:35.435 [2024-11-20 13:43:34.737075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.759310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.759458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:35.435 [2024-11-20 13:43:34.759541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.176 ms 00:23:35.435 [2024-11-20 13:43:34.759562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.760154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.760192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:35.435 [2024-11-20 13:43:34.760216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:23:35.435 [2024-11-20 13:43:34.760266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 13:43:34.832653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.832854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:35.435 [2024-11-20 13:43:34.833023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.313 ms 00:23:35.435 [2024-11-20 13:43:34.833049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.435 [2024-11-20 
13:43:34.857432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.435 [2024-11-20 13:43:34.857602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:35.435 [2024-11-20 13:43:34.857623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.217 ms 00:23:35.435 [2024-11-20 13:43:34.857632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-20 13:43:34.881366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-20 13:43:34.881417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:35.692 [2024-11-20 13:43:34.881431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.682 ms 00:23:35.692 [2024-11-20 13:43:34.881438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-20 13:43:34.904466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-20 13:43:34.904526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:35.692 [2024-11-20 13:43:34.904540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.970 ms 00:23:35.692 [2024-11-20 13:43:34.904548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-20 13:43:34.904601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-20 13:43:34.904609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:35.692 [2024-11-20 13:43:34.904625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:35.692 [2024-11-20 13:43:34.904633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-20 13:43:34.904727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-20 13:43:34.904736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:35.692 [2024-11-20 13:43:34.904746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:35.692 [2024-11-20 13:43:34.904753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-20 13:43:34.905770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4304.754 ms, result 0 00:23:35.692 { 00:23:35.692 "name": "ftl0", 00:23:35.692 "uuid": "46db9bda-b709-447a-9ba4-fb9abf515a7f" 00:23:35.692 } 00:23:35.692 13:43:34 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:35.693 13:43:34 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:35.693 13:43:34 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:35.693 13:43:34 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:35.693 13:43:34 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:35.693 13:43:34 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:35.693 13:43:34 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:35.950 13:43:35 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:35.950 [ 00:23:35.950 { 00:23:35.950 "name": "ftl0", 00:23:35.950 "aliases": [ 00:23:35.950 "46db9bda-b709-447a-9ba4-fb9abf515a7f" 00:23:35.950 ], 00:23:35.950 "product_name": "FTL 
disk", 00:23:35.950 "block_size": 4096, 00:23:35.950 "num_blocks": 20971520, 00:23:35.950 "uuid": "46db9bda-b709-447a-9ba4-fb9abf515a7f", 00:23:35.950 "assigned_rate_limits": { 00:23:35.950 "rw_ios_per_sec": 0, 00:23:35.950 "rw_mbytes_per_sec": 0, 00:23:35.950 "r_mbytes_per_sec": 0, 00:23:35.950 "w_mbytes_per_sec": 0 00:23:35.950 }, 00:23:35.950 "claimed": false, 00:23:35.950 "zoned": false, 00:23:35.950 "supported_io_types": { 00:23:35.950 "read": true, 00:23:35.950 "write": true, 00:23:35.950 "unmap": true, 00:23:35.950 "flush": true, 00:23:35.950 "reset": false, 00:23:35.950 "nvme_admin": false, 00:23:35.950 "nvme_io": false, 00:23:35.950 "nvme_io_md": false, 00:23:35.950 "write_zeroes": true, 00:23:35.950 "zcopy": false, 00:23:35.950 "get_zone_info": false, 00:23:35.950 "zone_management": false, 00:23:35.950 "zone_append": false, 00:23:35.950 "compare": false, 00:23:35.950 "compare_and_write": false, 00:23:35.950 "abort": false, 00:23:35.950 "seek_hole": false, 00:23:35.950 "seek_data": false, 00:23:35.950 "copy": false, 00:23:35.950 "nvme_iov_md": false 00:23:35.950 }, 00:23:35.950 "driver_specific": { 00:23:35.950 "ftl": { 00:23:35.950 "base_bdev": "11ceac4c-13a6-4d9e-9b45-6833393d3da1", 00:23:35.950 "cache": "nvc0n1p0" 00:23:35.950 } 00:23:35.950 } 00:23:35.950 } 00:23:35.950 ] 00:23:35.950 13:43:35 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:35.950 13:43:35 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:35.950 13:43:35 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:36.207 13:43:35 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:36.207 13:43:35 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:36.467 [2024-11-20 13:43:35.742573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.742776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:36.467 [2024-11-20 13:43:35.742795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:36.467 [2024-11-20 13:43:35.742808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.742838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.467 [2024-11-20 13:43:35.745445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.745476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:36.467 [2024-11-20 13:43:35.745488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:23:36.467 [2024-11-20 13:43:35.745497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.745937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.745950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:36.467 [2024-11-20 13:43:35.745961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:23:36.467 [2024-11-20 13:43:35.745976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.749221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.749242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.467 
[2024-11-20 13:43:35.749256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:23:36.467 [2024-11-20 13:43:35.749265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.755389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.755418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:36.467 [2024-11-20 13:43:35.755430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.092 ms 00:23:36.467 [2024-11-20 13:43:35.755437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.779016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.779065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.467 [2024-11-20 13:43:35.779079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.474 ms 00:23:36.467 [2024-11-20 13:43:35.779087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.793399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.793445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.467 [2024-11-20 13:43:35.793462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.237 ms 00:23:36.467 [2024-11-20 13:43:35.793472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.793673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.793684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.467 [2024-11-20 13:43:35.793694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:23:36.467 [2024-11-20 13:43:35.793701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.816742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.816932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:36.467 [2024-11-20 13:43:35.816953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:23:36.467 [2024-11-20 13:43:35.816960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.840324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.840372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:36.467 [2024-11-20 13:43:35.840385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.298 ms 00:23:36.467 [2024-11-20 13:43:35.840393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.863073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.863263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.467 [2024-11-20 13:43:35.863283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.614 ms 00:23:36.467 [2024-11-20 13:43:35.863291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.885948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.467 [2024-11-20 13:43:35.886003] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.467 [2024-11-20 13:43:35.886018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.540 ms 00:23:36.467 [2024-11-20 13:43:35.886026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.467 [2024-11-20 13:43:35.886079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.467 [2024-11-20 13:43:35.886094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 
[2024-11-20 13:43:35.886284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:36.467 [2024-11-20 13:43:35.886295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:36.468 [2024-11-20 13:43:35.886497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.468 [2024-11-20 13:43:35.886960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.468 [2024-11-20 13:43:35.887244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46db9bda-b709-447a-9ba4-fb9abf515a7f 00:23:36.468 [2024-11-20 13:43:35.887290] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.468 [2024-11-20 13:43:35.887316] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.468 [2024-11-20 13:43:35.887337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.468 [2024-11-20 13:43:35.887358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.468 [2024-11-20 13:43:35.887377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.468 [2024-11-20 13:43:35.887510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:36.468 [2024-11-20 13:43:35.887532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.469 [2024-11-20 13:43:35.887552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.469 [2024-11-20 13:43:35.887570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:36.469 [2024-11-20 13:43:35.887591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.469 [2024-11-20 13:43:35.887609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.469 [2024-11-20 13:43:35.887679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:23:36.469 [2024-11-20 13:43:35.887701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:35.900267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.727 [2024-11-20 13:43:35.900423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.727 [2024-11-20 13:43:35.900474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.499 ms 00:23:36.727 [2024-11-20 13:43:35.900496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:35.900893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.727 [2024-11-20 13:43:35.900985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.727 [2024-11-20 13:43:35.901073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:36.727 [2024-11-20 13:43:35.901097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:35.944673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:35.944839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.727 [2024-11-20 13:43:35.944917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:35.944929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
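The bands dump above shows all 100 bands free with wr_cnt: 0, which is what a clean shutdown of a freshly started device should report. When eyeballing longer logs, a throwaway tally of band states can help — illustrative, assuming the console output has been saved one entry per line to a file, here named ftl.log for the sake of the example:

  $ awk '/ftl_dev_dump_bands/ && /state:/ {print $NF}' ftl.log | sort | uniq -c
      100 free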
00:23:36.727 [2024-11-20 13:43:35.945015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:35.945024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.727 [2024-11-20 13:43:35.945034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:35.945041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:35.945152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:35.945165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.727 [2024-11-20 13:43:35.945175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:35.945182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:35.945209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:35.945217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.727 [2024-11-20 13:43:35.945226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:35.945233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:36.027414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:36.027466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.727 [2024-11-20 13:43:36.027479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:36.027487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:36.090835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:36.090884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.727 [2024-11-20 13:43:36.090897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:36.090905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:36.091004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:36.091015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.727 [2024-11-20 13:43:36.091027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:36.091034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:36.091115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:36.091124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.727 [2024-11-20 13:43:36.091134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 13:43:36.091141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:36.091241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.727 [2024-11-20 13:43:36.091251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.727 [2024-11-20 13:43:36.091262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.727 [2024-11-20 
13:43:36.091269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.727 [2024-11-20 13:43:36.091324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.728 [2024-11-20 13:43:36.091333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.728 [2024-11-20 13:43:36.091342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.728 [2024-11-20 13:43:36.091349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.728 [2024-11-20 13:43:36.091390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.728 [2024-11-20 13:43:36.091398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.728 [2024-11-20 13:43:36.091407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.728 [2024-11-20 13:43:36.091416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.728 [2024-11-20 13:43:36.091464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.728 [2024-11-20 13:43:36.091473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.728 [2024-11-20 13:43:36.091483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.728 [2024-11-20 13:43:36.091490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.728 [2024-11-20 13:43:36.091634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.040 ms, result 0 00:23:36.728 true 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75394 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75394 ']' 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75394 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75394 00:23:36.728 killing process with pid 75394 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75394' 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75394 00:23:36.728 13:43:36 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75394 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:54.835 13:43:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:54.836 13:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:54.836 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:54.836 fio-3.35 00:23:54.836 Starting 1 thread 00:23:57.365 00:23:57.365 test: (groupid=0, jobs=1): err= 0: pid=75600: Wed Nov 20 13:43:56 2024 00:23:57.365 read: IOPS=1296, BW=86.1MiB/s (90.3MB/s)(255MiB/2957msec) 00:23:57.365 slat (usec): min=3, max=102, avg= 4.83, stdev= 2.68 00:23:57.365 clat (usec): min=228, max=1070, avg=331.13, stdev=39.18 00:23:57.365 lat (usec): min=245, max=1081, avg=335.97, stdev=40.11 00:23:57.365 clat percentiles (usec): 00:23:57.365 | 1.00th=[ 255], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 318], 00:23:57.365 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 326], 00:23:57.365 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 420], 00:23:57.365 | 99.00th=[ 465], 99.50th=[ 529], 99.90th=[ 701], 99.95th=[ 824], 00:23:57.365 | 99.99th=[ 1074] 00:23:57.365 write: IOPS=1305, BW=86.7MiB/s (90.9MB/s)(256MiB/2954msec); 0 zone resets 00:23:57.365 slat (nsec): min=13978, max=68003, avg=19912.10, stdev=3553.20 00:23:57.365 clat (usec): min=287, max=6917, avg=399.84, stdev=365.63 00:23:57.365 lat (usec): min=307, max=6945, avg=419.75, stdev=365.76 00:23:57.365 clat percentiles (usec): 00:23:57.365 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 343], 00:23:57.365 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 351], 00:23:57.365 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 412], 95.00th=[ 453], 00:23:57.365 | 99.00th=[ 1844], 99.50th=[ 3163], 99.90th=[ 5669], 99.95th=[ 5932], 00:23:57.365 | 99.99th=[ 6915] 00:23:57.365 bw ( KiB/s): min=85816, max=90848, per=99.99%, avg=88753.60, stdev=1958.12, samples=5 00:23:57.365 iops : min= 1262, max= 1336, avg=1305.20, stdev=28.80, samples=5 00:23:57.365 lat (usec) : 250=0.12%, 500=97.54%, 750=1.47%, 1000=0.23% 
00:23:57.365 lat (msec) : 2=0.14%, 4=0.33%, 10=0.17% 00:23:57.365 cpu : usr=99.12%, sys=0.24%, ctx=7, majf=0, minf=1169 00:23:57.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:57.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.366 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:57.366 00:23:57.366 Run status group 0 (all jobs): 00:23:57.366 READ: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=255MiB (267MB), run=2957-2957msec 00:23:57.366 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=256MiB (269MB), run=2954-2954msec 00:23:59.265 ----------------------------------------------------- 00:23:59.265 Suppressions used: 00:23:59.265 count bytes template 00:23:59.265 1 5 /usr/src/fio/parse.c 00:23:59.265 1 8 libtcmalloc_minimal.so 00:23:59.265 1 904 libcrypto.so 00:23:59.265 ----------------------------------------------------- 00:23:59.265 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.265 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:59.523 13:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:59.524 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:59.524 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:59.524 fio-3.35 00:23:59.524 Starting 2 threads 00:24:26.176 00:24:26.176 first_half: (groupid=0, jobs=1): err= 0: pid=75694: Wed Nov 20 13:44:22 2024 00:24:26.176 read: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(255MiB/22371msec) 00:24:26.176 slat (nsec): min=3088, max=17611, avg=3777.40, stdev=581.43 00:24:26.176 clat (usec): min=617, max=240532, avg=33196.76, stdev=16589.89 00:24:26.176 lat (usec): min=621, max=240536, avg=33200.54, stdev=16589.92 00:24:26.176 clat percentiles (msec): 00:24:26.176 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 30], 00:24:26.176 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:24:26.176 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 44], 00:24:26.176 | 99.00th=[ 126], 99.50th=[ 148], 99.90th=[ 194], 99.95th=[ 209], 00:24:26.176 | 99.99th=[ 234] 00:24:26.176 write: IOPS=3450, BW=13.5MiB/s (14.1MB/s)(256MiB/18992msec); 0 zone resets 00:24:26.176 slat (usec): min=3, max=160, avg= 5.39, stdev= 2.37 00:24:26.176 clat (usec): min=348, max=88296, avg=10605.33, stdev=17506.55 00:24:26.176 lat (usec): min=357, max=88301, avg=10610.72, stdev=17506.62 00:24:26.176 clat percentiles (usec): 00:24:26.176 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 889], 20.00th=[ 1221], 00:24:26.176 | 30.00th=[ 2737], 40.00th=[ 3752], 50.00th=[ 4817], 60.00th=[ 5342], 00:24:26.176 | 70.00th=[ 5997], 80.00th=[10421], 90.00th=[30016], 95.00th=[62129], 00:24:26.176 | 99.00th=[69731], 99.50th=[71828], 99.90th=[78119], 99.95th=[81265], 00:24:26.176 | 99.99th=[87557] 00:24:26.176 bw ( KiB/s): min= 416, max=42984, per=86.33%, avg=23831.27, stdev=13492.59, samples=22 00:24:26.176 iops : min= 104, max=10746, avg=5957.82, stdev=3373.15, samples=22 00:24:26.176 lat (usec) : 500=0.02%, 750=2.36%, 1000=4.59% 00:24:26.176 lat (msec) : 2=5.80%, 4=8.51%, 10=19.96%, 20=5.22%, 50=47.06% 00:24:26.176 lat (msec) : 100=5.68%, 250=0.80% 00:24:26.176 cpu : usr=99.44%, sys=0.08%, ctx=33, majf=0, minf=5584 00:24:26.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:26.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.176 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.176 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.176 second_half: (groupid=0, jobs=1): err= 0: pid=75695: Wed Nov 20 13:44:22 2024 00:24:26.176 read: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(254MiB/22222msec) 00:24:26.176 slat (nsec): min=3067, max=25007, avg=3869.00, stdev=739.94 00:24:26.176 clat (usec): min=643, max=161614, avg=33643.06, stdev=14475.69 00:24:26.176 lat (usec): min=646, max=161618, avg=33646.93, stdev=14475.70 00:24:26.176 clat percentiles (msec): 00:24:26.176 | 1.00th=[ 4], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:24:26.176 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:24:26.176 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 
46], 00:24:26.176 | 99.00th=[ 112], 99.50th=[ 131], 99.90th=[ 150], 99.95th=[ 155], 00:24:26.176 | 99.99th=[ 157] 00:24:26.176 write: IOPS=4579, BW=17.9MiB/s (18.8MB/s)(256MiB/14312msec); 0 zone resets 00:24:26.176 slat (usec): min=3, max=388, avg= 5.59, stdev= 3.21 00:24:26.176 clat (usec): min=363, max=88002, avg=9940.30, stdev=16955.10 00:24:26.176 lat (usec): min=370, max=88007, avg=9945.88, stdev=16955.11 00:24:26.176 clat percentiles (usec): 00:24:26.176 | 1.00th=[ 668], 5.00th=[ 783], 10.00th=[ 914], 20.00th=[ 1106], 00:24:26.176 | 30.00th=[ 1401], 40.00th=[ 3359], 50.00th=[ 4752], 60.00th=[ 5473], 00:24:26.176 | 70.00th=[ 6587], 80.00th=[10421], 90.00th=[16909], 95.00th=[61080], 00:24:26.176 | 99.00th=[69731], 99.50th=[71828], 99.90th=[78119], 99.95th=[80217], 00:24:26.176 | 99.99th=[87557] 00:24:26.176 bw ( KiB/s): min= 5880, max=41568, per=100.00%, avg=30840.47, stdev=11100.66, samples=17 00:24:26.176 iops : min= 1470, max=10392, avg=7710.12, stdev=2775.17, samples=17 00:24:26.176 lat (usec) : 500=0.03%, 750=1.85%, 1000=5.41% 00:24:26.176 lat (msec) : 2=10.05%, 4=5.16%, 10=17.61%, 20=6.47%, 50=46.73% 00:24:26.176 lat (msec) : 100=5.92%, 250=0.77% 00:24:26.176 cpu : usr=99.33%, sys=0.08%, ctx=46, majf=0, minf=5535 00:24:26.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:26.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.176 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.176 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.176 00:24:26.176 Run status group 0 (all jobs): 00:24:26.176 READ: bw=22.8MiB/s (23.9MB/s), 11.4MiB/s-11.5MiB/s (11.9MB/s-12.0MB/s), io=509MiB (534MB), run=22222-22371msec 00:24:26.176 WRITE: bw=27.0MiB/s (28.3MB/s), 13.5MiB/s-17.9MiB/s (14.1MB/s-18.8MB/s), io=512MiB (537MB), run=14312-18992msec 00:24:26.176 ----------------------------------------------------- 00:24:26.176 Suppressions used: 00:24:26.176 count bytes template 00:24:26.176 2 10 /usr/src/fio/parse.c 00:24:26.176 3 288 /usr/src/fio/iolog.c 00:24:26.176 1 8 libtcmalloc_minimal.so 00:24:26.176 1 904 libcrypto.so 00:24:26.176 ----------------------------------------------------- 00:24:26.176 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:26.176 13:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.177 13:44:23 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:26.177 13:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.177 13:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:26.177 13:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:26.177 13:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:26.177 13:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:26.177 13:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:26.177 13:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:26.177 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:26.177 fio-3.35 00:24:26.177 Starting 1 thread 00:24:38.502 00:24:38.502 test: (groupid=0, jobs=1): err= 0: pid=75989: Wed Nov 20 13:44:37 2024 00:24:38.502 read: IOPS=8059, BW=31.5MiB/s (33.0MB/s)(255MiB/8090msec) 00:24:38.502 slat (usec): min=3, max=162, avg= 4.32, stdev= 1.35 00:24:38.502 clat (usec): min=540, max=30918, avg=15872.76, stdev=1777.60 00:24:38.502 lat (usec): min=544, max=30923, avg=15877.08, stdev=1777.69 00:24:38.502 clat percentiles (usec): 00:24:38.502 | 1.00th=[13698], 5.00th=[14222], 10.00th=[14746], 20.00th=[14877], 00:24:38.502 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:24:38.502 | 70.00th=[15795], 80.00th=[16057], 90.00th=[17433], 95.00th=[20055], 00:24:38.502 | 99.00th=[22938], 99.50th=[23462], 99.90th=[25822], 99.95th=[27132], 00:24:38.502 | 99.99th=[30278] 00:24:38.502 write: IOPS=16.1k, BW=62.8MiB/s (65.8MB/s)(256MiB/4079msec); 0 zone resets 00:24:38.502 slat (usec): min=4, max=805, avg= 6.61, stdev= 5.53 00:24:38.502 clat (usec): min=499, max=45845, avg=7922.84, stdev=9780.81 00:24:38.502 lat (usec): min=504, max=45851, avg=7929.44, stdev=9780.78 00:24:38.502 clat percentiles (usec): 00:24:38.502 | 1.00th=[ 635], 5.00th=[ 766], 10.00th=[ 873], 20.00th=[ 996], 00:24:38.502 | 30.00th=[ 1106], 40.00th=[ 1450], 50.00th=[ 5473], 60.00th=[ 6259], 00:24:38.502 | 70.00th=[ 7177], 80.00th=[ 8455], 90.00th=[28705], 95.00th=[30016], 00:24:38.502 | 99.00th=[32375], 99.50th=[34341], 99.90th=[42206], 99.95th=[42730], 00:24:38.502 | 99.99th=[44827] 00:24:38.502 bw ( KiB/s): min= 7832, max=85424, per=90.62%, avg=58239.78, stdev=21482.65, samples=9 00:24:38.502 iops : min= 1958, max=21356, avg=14559.89, stdev=5370.64, samples=9 00:24:38.502 lat (usec) : 500=0.01%, 750=2.25%, 1000=8.16% 00:24:38.502 lat (msec) : 2=10.18%, 4=0.63%, 10=20.66%, 20=47.59%, 50=10.52% 00:24:38.502 cpu : usr=98.36%, sys=0.51%, ctx=28, majf=0, minf=5565 00:24:38.502 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:38.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.502 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:38.502 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:38.502 00:24:38.502 Run status group 0 (all jobs): 00:24:38.502 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=255MiB (267MB), run=8090-8090msec 00:24:38.502 WRITE: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=256MiB (268MB), run=4079-4079msec 00:24:39.436 ----------------------------------------------------- 00:24:39.436 Suppressions used: 00:24:39.436 count bytes template 00:24:39.436 1 5 /usr/src/fio/parse.c 00:24:39.436 2 192 /usr/src/fio/iolog.c 00:24:39.436 1 8 libtcmalloc_minimal.so 00:24:39.436 1 904 libcrypto.so 00:24:39.436 ----------------------------------------------------- 00:24:39.436 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:39.436 Remove shared memory files 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57253 /dev/shm/spdk_tgt_trace.pid74310 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:39.436 ************************************ 00:24:39.436 END TEST ftl_fio_basic 00:24:39.436 ************************************ 00:24:39.436 00:24:39.436 real 1m11.995s 00:24:39.436 user 2m23.406s 00:24:39.436 sys 0m12.534s 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.436 13:44:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:39.694 13:44:38 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:39.694 13:44:38 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:39.694 13:44:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.694 13:44:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:39.694 ************************************ 00:24:39.694 START TEST ftl_bdevperf 00:24:39.694 ************************************ 00:24:39.694 13:44:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:39.694 * Looking for test storage... 
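As an aside on the fio passes above: the fio_bdev helper resolves the ASAN runtime with ldd and preloads it ahead of the spdk_bdev ioengine plugin, since the sanitizer must be loaded before any instrumented shared object. A minimal sketch of the sequence traced above (the libasan.so.8 location is specific to this machine):

  # locate the ASAN runtime the spdk_bdev fio plugin was linked against
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # preload the sanitizer first, then the plugin, then run the job file
  LD_PRELOAD="$asan_lib $plugin" \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio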
00:24:39.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.694 13:44:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.694 13:44:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.694 13:44:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.694 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:39.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.695 --rc genhtml_branch_coverage=1 00:24:39.695 --rc genhtml_function_coverage=1 00:24:39.695 --rc genhtml_legend=1 00:24:39.695 --rc geninfo_all_blocks=1 00:24:39.695 --rc geninfo_unexecuted_blocks=1 00:24:39.695 00:24:39.695 ' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:39.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.695 --rc genhtml_branch_coverage=1 00:24:39.695 
--rc genhtml_function_coverage=1 00:24:39.695 --rc genhtml_legend=1 00:24:39.695 --rc geninfo_all_blocks=1 00:24:39.695 --rc geninfo_unexecuted_blocks=1 00:24:39.695 00:24:39.695 ' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:39.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.695 --rc genhtml_branch_coverage=1 00:24:39.695 --rc genhtml_function_coverage=1 00:24:39.695 --rc genhtml_legend=1 00:24:39.695 --rc geninfo_all_blocks=1 00:24:39.695 --rc geninfo_unexecuted_blocks=1 00:24:39.695 00:24:39.695 ' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:39.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.695 --rc genhtml_branch_coverage=1 00:24:39.695 --rc genhtml_function_coverage=1 00:24:39.695 --rc genhtml_legend=1 00:24:39.695 --rc geninfo_all_blocks=1 00:24:39.695 --rc geninfo_unexecuted_blocks=1 00:24:39.695 00:24:39.695 ' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76211 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76211 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76211 ']' 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.695 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:39.695 [2024-11-20 13:44:39.103539] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:24:39.695 [2024-11-20 13:44:39.103797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76211 ] 00:24:39.953 [2024-11-20 13:44:39.261793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.953 [2024-11-20 13:44:39.362085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:40.886 13:44:39 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:40.886 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:41.144 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.144 { 00:24:41.144 "name": "nvme0n1", 00:24:41.144 "aliases": [ 00:24:41.144 "6d2d5178-d135-41a1-8294-60611f0d9de0" 00:24:41.144 ], 00:24:41.144 "product_name": "NVMe disk", 00:24:41.144 "block_size": 4096, 00:24:41.144 "num_blocks": 1310720, 00:24:41.144 "uuid": "6d2d5178-d135-41a1-8294-60611f0d9de0", 00:24:41.144 "numa_id": -1, 00:24:41.144 "assigned_rate_limits": { 00:24:41.144 "rw_ios_per_sec": 0, 00:24:41.144 "rw_mbytes_per_sec": 0, 00:24:41.144 "r_mbytes_per_sec": 0, 00:24:41.144 "w_mbytes_per_sec": 0 00:24:41.144 }, 00:24:41.144 "claimed": true, 00:24:41.144 "claim_type": "read_many_write_one", 00:24:41.144 "zoned": false, 00:24:41.144 "supported_io_types": { 00:24:41.144 "read": true, 00:24:41.144 "write": true, 00:24:41.144 "unmap": true, 00:24:41.144 "flush": true, 00:24:41.144 "reset": true, 00:24:41.144 "nvme_admin": true, 00:24:41.144 "nvme_io": true, 00:24:41.144 "nvme_io_md": false, 00:24:41.144 "write_zeroes": true, 00:24:41.144 "zcopy": false, 00:24:41.144 "get_zone_info": false, 00:24:41.145 "zone_management": false, 00:24:41.145 "zone_append": false, 00:24:41.145 "compare": true, 00:24:41.145 "compare_and_write": false, 00:24:41.145 "abort": true, 00:24:41.145 "seek_hole": false, 00:24:41.145 "seek_data": false, 00:24:41.145 "copy": true, 00:24:41.145 "nvme_iov_md": false 00:24:41.145 }, 00:24:41.145 "driver_specific": { 00:24:41.145 
"nvme": [ 00:24:41.145 { 00:24:41.145 "pci_address": "0000:00:11.0", 00:24:41.145 "trid": { 00:24:41.145 "trtype": "PCIe", 00:24:41.145 "traddr": "0000:00:11.0" 00:24:41.145 }, 00:24:41.145 "ctrlr_data": { 00:24:41.145 "cntlid": 0, 00:24:41.145 "vendor_id": "0x1b36", 00:24:41.145 "model_number": "QEMU NVMe Ctrl", 00:24:41.145 "serial_number": "12341", 00:24:41.145 "firmware_revision": "8.0.0", 00:24:41.145 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:41.145 "oacs": { 00:24:41.145 "security": 0, 00:24:41.145 "format": 1, 00:24:41.145 "firmware": 0, 00:24:41.145 "ns_manage": 1 00:24:41.145 }, 00:24:41.145 "multi_ctrlr": false, 00:24:41.145 "ana_reporting": false 00:24:41.145 }, 00:24:41.145 "vs": { 00:24:41.145 "nvme_version": "1.4" 00:24:41.145 }, 00:24:41.145 "ns_data": { 00:24:41.145 "id": 1, 00:24:41.145 "can_share": false 00:24:41.145 } 00:24:41.145 } 00:24:41.145 ], 00:24:41.145 "mp_policy": "active_passive" 00:24:41.145 } 00:24:41.145 } 00:24:41.145 ]' 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:41.145 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:41.403 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5da476b7-d821-4100-9979-d5f8ea6b70e7 00:24:41.403 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:41.403 13:44:40 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5da476b7-d821-4100-9979-d5f8ea6b70e7 00:24:41.661 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:41.920 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c58c273b-d6f4-4559-9b89-26eee21e7fa3 00:24:41.920 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c58c273b-d6f4-4559-9b89-26eee21e7fa3 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.177 13:44:41 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:42.177 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.435 { 00:24:42.435 "name": "093d6d17-4d79-42f5-8993-8e984c50dd2e", 00:24:42.435 "aliases": [ 00:24:42.435 "lvs/nvme0n1p0" 00:24:42.435 ], 00:24:42.435 "product_name": "Logical Volume", 00:24:42.435 "block_size": 4096, 00:24:42.435 "num_blocks": 26476544, 00:24:42.435 "uuid": "093d6d17-4d79-42f5-8993-8e984c50dd2e", 00:24:42.435 "assigned_rate_limits": { 00:24:42.435 "rw_ios_per_sec": 0, 00:24:42.435 "rw_mbytes_per_sec": 0, 00:24:42.435 "r_mbytes_per_sec": 0, 00:24:42.435 "w_mbytes_per_sec": 0 00:24:42.435 }, 00:24:42.435 "claimed": false, 00:24:42.435 "zoned": false, 00:24:42.435 "supported_io_types": { 00:24:42.435 "read": true, 00:24:42.435 "write": true, 00:24:42.435 "unmap": true, 00:24:42.435 "flush": false, 00:24:42.435 "reset": true, 00:24:42.435 "nvme_admin": false, 00:24:42.435 "nvme_io": false, 00:24:42.435 "nvme_io_md": false, 00:24:42.435 "write_zeroes": true, 00:24:42.435 "zcopy": false, 00:24:42.435 "get_zone_info": false, 00:24:42.435 "zone_management": false, 00:24:42.435 "zone_append": false, 00:24:42.435 "compare": false, 00:24:42.435 "compare_and_write": false, 00:24:42.435 "abort": false, 00:24:42.435 "seek_hole": true, 00:24:42.435 "seek_data": true, 00:24:42.435 "copy": false, 00:24:42.435 "nvme_iov_md": false 00:24:42.435 }, 00:24:42.435 "driver_specific": { 00:24:42.435 "lvol": { 00:24:42.435 "lvol_store_uuid": "c58c273b-d6f4-4559-9b89-26eee21e7fa3", 00:24:42.435 "base_bdev": "nvme0n1", 00:24:42.435 "thin_provision": true, 00:24:42.435 "num_allocated_clusters": 0, 00:24:42.435 "snapshot": false, 00:24:42.435 "clone": false, 00:24:42.435 "esnap_clone": false 00:24:42.435 } 00:24:42.435 } 00:24:42.435 } 00:24:42.435 ]' 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:42.435 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:42.693 13:44:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.952 { 00:24:42.952 "name": "093d6d17-4d79-42f5-8993-8e984c50dd2e", 00:24:42.952 "aliases": [ 00:24:42.952 "lvs/nvme0n1p0" 00:24:42.952 ], 00:24:42.952 "product_name": "Logical Volume", 00:24:42.952 "block_size": 4096, 00:24:42.952 "num_blocks": 26476544, 00:24:42.952 "uuid": "093d6d17-4d79-42f5-8993-8e984c50dd2e", 00:24:42.952 "assigned_rate_limits": { 00:24:42.952 "rw_ios_per_sec": 0, 00:24:42.952 "rw_mbytes_per_sec": 0, 00:24:42.952 "r_mbytes_per_sec": 0, 00:24:42.952 "w_mbytes_per_sec": 0 00:24:42.952 }, 00:24:42.952 "claimed": false, 00:24:42.952 "zoned": false, 00:24:42.952 "supported_io_types": { 00:24:42.952 "read": true, 00:24:42.952 "write": true, 00:24:42.952 "unmap": true, 00:24:42.952 "flush": false, 00:24:42.952 "reset": true, 00:24:42.952 "nvme_admin": false, 00:24:42.952 "nvme_io": false, 00:24:42.952 "nvme_io_md": false, 00:24:42.952 "write_zeroes": true, 00:24:42.952 "zcopy": false, 00:24:42.952 "get_zone_info": false, 00:24:42.952 "zone_management": false, 00:24:42.952 "zone_append": false, 00:24:42.952 "compare": false, 00:24:42.952 "compare_and_write": false, 00:24:42.952 "abort": false, 00:24:42.952 "seek_hole": true, 00:24:42.952 "seek_data": true, 00:24:42.952 "copy": false, 00:24:42.952 "nvme_iov_md": false 00:24:42.952 }, 00:24:42.952 "driver_specific": { 00:24:42.952 "lvol": { 00:24:42.952 "lvol_store_uuid": "c58c273b-d6f4-4559-9b89-26eee21e7fa3", 00:24:42.952 "base_bdev": "nvme0n1", 00:24:42.952 "thin_provision": true, 00:24:42.952 "num_allocated_clusters": 0, 00:24:42.952 "snapshot": false, 00:24:42.952 "clone": false, 00:24:42.952 "esnap_clone": false 00:24:42.952 } 00:24:42.952 } 00:24:42.952 } 00:24:42.952 ]' 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:42.952 13:44:42 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:43.211 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 093d6d17-4d79-42f5-8993-8e984c50dd2e 00:24:43.469 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.469 { 00:24:43.469 "name": "093d6d17-4d79-42f5-8993-8e984c50dd2e", 00:24:43.469 "aliases": [ 00:24:43.469 "lvs/nvme0n1p0" 00:24:43.469 ], 00:24:43.469 "product_name": "Logical Volume", 00:24:43.469 "block_size": 4096, 00:24:43.469 "num_blocks": 26476544, 00:24:43.469 "uuid": "093d6d17-4d79-42f5-8993-8e984c50dd2e", 00:24:43.469 "assigned_rate_limits": { 00:24:43.469 "rw_ios_per_sec": 0, 00:24:43.469 "rw_mbytes_per_sec": 0, 00:24:43.470 "r_mbytes_per_sec": 0, 00:24:43.470 "w_mbytes_per_sec": 0 00:24:43.470 }, 00:24:43.470 "claimed": false, 00:24:43.470 "zoned": false, 00:24:43.470 "supported_io_types": { 00:24:43.470 "read": true, 00:24:43.470 "write": true, 00:24:43.470 "unmap": true, 00:24:43.470 "flush": false, 00:24:43.470 "reset": true, 00:24:43.470 "nvme_admin": false, 00:24:43.470 "nvme_io": false, 00:24:43.470 "nvme_io_md": false, 00:24:43.470 "write_zeroes": true, 00:24:43.470 "zcopy": false, 00:24:43.470 "get_zone_info": false, 00:24:43.470 "zone_management": false, 00:24:43.470 "zone_append": false, 00:24:43.470 "compare": false, 00:24:43.470 "compare_and_write": false, 00:24:43.470 "abort": false, 00:24:43.470 "seek_hole": true, 00:24:43.470 "seek_data": true, 00:24:43.470 "copy": false, 00:24:43.470 "nvme_iov_md": false 00:24:43.470 }, 00:24:43.470 "driver_specific": { 00:24:43.470 "lvol": { 00:24:43.470 "lvol_store_uuid": "c58c273b-d6f4-4559-9b89-26eee21e7fa3", 00:24:43.470 "base_bdev": "nvme0n1", 00:24:43.470 "thin_provision": true, 00:24:43.470 "num_allocated_clusters": 0, 00:24:43.470 "snapshot": false, 00:24:43.470 "clone": false, 00:24:43.470 "esnap_clone": false 00:24:43.470 } 00:24:43.470 } 00:24:43.470 } 00:24:43.470 ]' 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:43.470 13:44:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 093d6d17-4d79-42f5-8993-8e984c50dd2e -c nvc0n1p0 --l2p_dram_limit 20 00:24:43.729 [2024-11-20 13:44:42.932606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.932811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.729 [2024-11-20 13:44:42.932832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:43.729 [2024-11-20 13:44:42.932843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.932910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.932924] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.729 [2024-11-20 13:44:42.932933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:43.729 [2024-11-20 13:44:42.932942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.932979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.729 [2024-11-20 13:44:42.933725] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.729 [2024-11-20 13:44:42.933750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.933760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.729 [2024-11-20 13:44:42.933769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:24:43.729 [2024-11-20 13:44:42.933778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.933836] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3d5d4e25-b4e6-4c39-936d-af510fa9acb3 00:24:43.729 [2024-11-20 13:44:42.934831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.934952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:43.729 [2024-11-20 13:44:42.934980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:43.729 [2024-11-20 13:44:42.934994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.940055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.940159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.729 [2024-11-20 13:44:42.940217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms 00:24:43.729 [2024-11-20 13:44:42.940241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.940342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.940465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.729 [2024-11-20 13:44:42.940513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:43.729 [2024-11-20 13:44:42.940532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.940592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.940620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.729 [2024-11-20 13:44:42.940643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:43.729 [2024-11-20 13:44:42.940662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.940752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.729 [2024-11-20 13:44:42.944328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.944434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.729 [2024-11-20 13:44:42.944493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:24:43.729 [2024-11-20 13:44:42.944523] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.944568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.944625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.729 [2024-11-20 13:44:42.944649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:43.729 [2024-11-20 13:44:42.944670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.944790] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:43.729 [2024-11-20 13:44:42.944961] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:43.729 [2024-11-20 13:44:42.945069] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.729 [2024-11-20 13:44:42.945107] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:43.729 [2024-11-20 13:44:42.945139] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.729 [2024-11-20 13:44:42.945171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.729 [2024-11-20 13:44:42.945306] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.729 [2024-11-20 13:44:42.945329] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.729 [2024-11-20 13:44:42.945349] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:43.729 [2024-11-20 13:44:42.945370] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:43.729 [2024-11-20 13:44:42.945460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.945488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.729 [2024-11-20 13:44:42.945508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:24:43.729 [2024-11-20 13:44:42.945531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.945627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.729 [2024-11-20 13:44:42.945653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.729 [2024-11-20 13:44:42.945673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:43.729 [2024-11-20 13:44:42.945695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.729 [2024-11-20 13:44:42.945829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.729 [2024-11-20 13:44:42.945861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.729 [2024-11-20 13:44:42.945907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.730 [2024-11-20 13:44:42.945959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.945997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.730 [2024-11-20 13:44:42.946018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.730 
[2024-11-20 13:44:42.946058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.730 [2024-11-20 13:44:42.946140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.730 [2024-11-20 13:44:42.946184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.730 [2024-11-20 13:44:42.946204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.730 [2024-11-20 13:44:42.946222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.730 [2024-11-20 13:44:42.946308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.730 [2024-11-20 13:44:42.946331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:43.730 [2024-11-20 13:44:42.946353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.730 [2024-11-20 13:44:42.946392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:43.730 [2024-11-20 13:44:42.946453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.730 [2024-11-20 13:44:42.946496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.730 [2024-11-20 13:44:42.946534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.730 [2024-11-20 13:44:42.946554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.730 [2024-11-20 13:44:42.946622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.730 [2024-11-20 13:44:42.946640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.730 [2024-11-20 13:44:42.946707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.730 [2024-11-20 13:44:42.946731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.730 [2024-11-20 13:44:42.946854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.730 [2024-11-20 13:44:42.946873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:43.730 [2024-11-20 13:44:42.946893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.730 [2024-11-20 13:44:42.946911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.730 [2024-11-20 13:44:42.947010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:43.730 [2024-11-20 13:44:42.947033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.730 [2024-11-20 13:44:42.947052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:43.730 [2024-11-20 13:44:42.947071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:43.730 [2024-11-20 13:44:42.947092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.947182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:43.730 [2024-11-20 13:44:42.947206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:43.730 [2024-11-20 13:44:42.947225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.947245] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.730 [2024-11-20 13:44:42.947292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.730 [2024-11-20 13:44:42.947318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.730 [2024-11-20 13:44:42.947337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.730 [2024-11-20 13:44:42.947359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.730 [2024-11-20 13:44:42.947378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.730 [2024-11-20 13:44:42.947426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.730 [2024-11-20 13:44:42.947471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.730 [2024-11-20 13:44:42.947494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.730 [2024-11-20 13:44:42.947532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.730 [2024-11-20 13:44:42.947559] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.730 [2024-11-20 13:44:42.947591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.947655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.730 [2024-11-20 13:44:42.947686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:43.730 [2024-11-20 13:44:42.947716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:43.730 [2024-11-20 13:44:42.947772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:43.730 [2024-11-20 13:44:42.947876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:43.730 [2024-11-20 13:44:42.947909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:43.730 [2024-11-20 13:44:42.947940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:43.730 [2024-11-20 13:44:42.948013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:43.730 [2024-11-20 13:44:42.948050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:43.730 [2024-11-20 13:44:42.948079] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.948109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.948192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.948251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.948282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:43.730 [2024-11-20 13:44:42.948313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.730 [2024-11-20 13:44:42.948343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.948460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.730 [2024-11-20 13:44:42.948494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.730 [2024-11-20 13:44:42.948526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.730 [2024-11-20 13:44:42.948607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.730 [2024-11-20 13:44:42.948620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.730 [2024-11-20 13:44:42.948630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.730 [2024-11-20 13:44:42.948640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:24:43.730 [2024-11-20 13:44:42.948647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.730 [2024-11-20 13:44:42.948699] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
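A quick arithmetic sanity check on the layout dump above: the L2P table stores one 4-byte entry per mapped 4 KiB block, so the reported entry count lines up with the 80.00 MiB l2p region (the second check assumes region type:0x2 in the SB metadata is that same L2P region):

  # 20971520 L2P entries * 4 bytes/entry = 83886080 bytes = 80 MiB
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80
  # same 80 MiB from the SB metadata entry: 0x5000 blocks of 4 KiB
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))  # -> 80

Only about 20 MiB of that table is kept resident because of the --l2p_dram_limit 20 passed to bdev_ftl_create; the startup below accordingly reports an L2P maximum resident size of 19 (of 20) MiB.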
00:24:43.730 [2024-11-20 13:44:42.948709] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:46.273 [2024-11-20 13:44:45.195901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.195959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:46.273 [2024-11-20 13:44:45.195994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2247.189 ms 00:24:46.273 [2024-11-20 13:44:45.196004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.221193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.221243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.273 [2024-11-20 13:44:45.221257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.895 ms 00:24:46.273 [2024-11-20 13:44:45.221265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.221425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.221436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.273 [2024-11-20 13:44:45.221448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:46.273 [2024-11-20 13:44:45.221456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.264740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.264791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.273 [2024-11-20 13:44:45.264807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.247 ms 00:24:46.273 [2024-11-20 13:44:45.264815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.264869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.264881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.273 [2024-11-20 13:44:45.264891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.273 [2024-11-20 13:44:45.264899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.265281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.265296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.273 [2024-11-20 13:44:45.265307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:24:46.273 [2024-11-20 13:44:45.265314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.265435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.265490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.273 [2024-11-20 13:44:45.265505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:46.273 [2024-11-20 13:44:45.265512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.278333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.278366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.273 [2024-11-20 
13:44:45.278378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.802 ms 00:24:46.273 [2024-11-20 13:44:45.278385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.289581] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:46.273 [2024-11-20 13:44:45.294604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.294641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.273 [2024-11-20 13:44:45.294653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.140 ms 00:24:46.273 [2024-11-20 13:44:45.294664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.351916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.351988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:46.273 [2024-11-20 13:44:45.352002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.223 ms 00:24:46.273 [2024-11-20 13:44:45.352023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.352185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.352199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.273 [2024-11-20 13:44:45.352208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:24:46.273 [2024-11-20 13:44:45.352217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.375496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.375681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:46.273 [2024-11-20 13:44:45.375699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.232 ms 00:24:46.273 [2024-11-20 13:44:45.375709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.397622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.397661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:46.273 [2024-11-20 13:44:45.397673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.888 ms 00:24:46.273 [2024-11-20 13:44:45.397682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.398258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.398279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.273 [2024-11-20 13:44:45.398289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:24:46.273 [2024-11-20 13:44:45.398298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 13:44:45.466921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.466989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:46.273 [2024-11-20 13:44:45.467002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.593 ms 00:24:46.273 [2024-11-20 13:44:45.467012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-11-20 
13:44:45.491208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-11-20 13:44:45.491263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:46.274 [2024-11-20 13:44:45.491277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.125 ms 00:24:46.274 [2024-11-20 13:44:45.491287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.274 [2024-11-20 13:44:45.514752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.274 [2024-11-20 13:44:45.514810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:46.274 [2024-11-20 13:44:45.514822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.424 ms 00:24:46.274 [2024-11-20 13:44:45.514831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.274 [2024-11-20 13:44:45.537676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.274 [2024-11-20 13:44:45.537847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.274 [2024-11-20 13:44:45.537864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.802 ms 00:24:46.274 [2024-11-20 13:44:45.537873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.274 [2024-11-20 13:44:45.537908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.274 [2024-11-20 13:44:45.537921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.274 [2024-11-20 13:44:45.537930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:46.274 [2024-11-20 13:44:45.537938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.274 [2024-11-20 13:44:45.538033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.274 [2024-11-20 13:44:45.538046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.274 [2024-11-20 13:44:45.538054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:46.274 [2024-11-20 13:44:45.538064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.274 [2024-11-20 13:44:45.538911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2605.910 ms, result 0 00:24:46.274 { 00:24:46.274 "name": "ftl0", 00:24:46.274 "uuid": "3d5d4e25-b4e6-4c39-936d-af510fa9acb3" 00:24:46.274 } 00:24:46.274 13:44:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:46.274 13:44:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:46.274 13:44:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:46.531 13:44:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:46.531 [2024-11-20 13:44:45.899333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:46.531 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:46.531 Zero copy mechanism will not be used. 00:24:46.531 Running I/O for 4 seconds... 
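To recap the bring-up traced above, ftl0 sits on a thin-provisioned lvol (data) plus an NVMe split (write-buffer cache); condensed to the rpc.py calls from this run's trace, with the UUIDs this particular run generated:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # base device: NVMe namespace, lvstore on top, one thin 103424 MiB lvol
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u c58c273b-d6f4-4559-9b89-26eee21e7fa3

  # cache device: second NVMe controller, split off a 5171 MiB chunk
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1

  # assemble the FTL bdev on top of both
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 093d6d17-4d79-42f5-8993-8e984c50dd2e \
      -c nvc0n1p0 --l2p_dram_limit 20

The 103424 MiB data size and 5171 MiB cache size are the values this run derived from the two QEMU namespaces, and the -t 240 RPC timeout allows for the NV cache scrub that runs during first startup.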
00:24:48.836 3052.00 IOPS, 202.67 MiB/s [2024-11-20T13:44:49.195Z] 3110.50 IOPS, 206.56 MiB/s [2024-11-20T13:44:50.127Z] 3143.67 IOPS, 208.76 MiB/s [2024-11-20T13:44:50.127Z] 3089.50 IOPS, 205.16 MiB/s 00:24:50.700 Latency(us) 00:24:50.700 [2024-11-20T13:44:50.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.700 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:24:50.700 ftl0 : 4.00 3087.54 205.03 0.00 0.00 340.88 165.42 2760.07 00:24:50.700 [2024-11-20T13:44:50.127Z] =================================================================================================================== 00:24:50.700 [2024-11-20T13:44:50.127Z] Total : 3087.54 205.03 0.00 0.00 340.88 165.42 2760.07 00:24:50.700 [2024-11-20 13:44:49.911475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:50.700 { 00:24:50.700 "results": [ 00:24:50.700 { 00:24:50.700 "job": "ftl0", 00:24:50.700 "core_mask": "0x1", 00:24:50.700 "workload": "randwrite", 00:24:50.700 "status": "finished", 00:24:50.700 "queue_depth": 1, 00:24:50.700 "io_size": 69632, 00:24:50.700 "runtime": 4.002868, 00:24:50.700 "iops": 3087.5362365184164, 00:24:50.700 "mibps": 205.0317032063011, 00:24:50.700 "io_failed": 0, 00:24:50.700 "io_timeout": 0, 00:24:50.700 "avg_latency_us": 340.8772081385723, 00:24:50.700 "min_latency_us": 165.41538461538462, 00:24:50.700 "max_latency_us": 2760.0738461538463 00:24:50.700 } 00:24:50.700 ], 00:24:50.700 "core_count": 1 00:24:50.700 } 00:24:50.700 13:44:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:24:50.700 [2024-11-20 13:44:50.016112] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:50.700 Running I/O for 4 seconds... 
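Each completed pass emits a JSON block like the one above, the machine-readable form of the latency table; the fields are self-consistent, since iops x io_size / 2^20 gives mibps (3087.5362 x 69632 / 1048576 = approximately 205.03 MiB/s, matching both fields). A hypothetical jq one-liner for summarizing such a capture (results.json is an assumed file name; this run prints to stdout instead):

  # one summary line per job from a saved bdevperf results document
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json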
00:24:53.008 10904.00 IOPS, 42.59 MiB/s [2024-11-20T13:44:53.368Z] 10763.00 IOPS, 42.04 MiB/s [2024-11-20T13:44:54.350Z] 10523.67 IOPS, 41.11 MiB/s [2024-11-20T13:44:54.350Z] 9825.25 IOPS, 38.38 MiB/s 00:24:54.923 Latency(us) 00:24:54.923 [2024-11-20T13:44:54.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.923 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.923 ftl0 : 4.01 9820.80 38.36 0.00 0.00 13026.64 269.39 251658.24 00:24:54.923 [2024-11-20T13:44:54.350Z] =================================================================================================================== 00:24:54.923 [2024-11-20T13:44:54.350Z] Total : 9820.80 38.36 0.00 0.00 13026.64 0.00 251658.24 00:24:54.923 [2024-11-20 13:44:54.039930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:54.923 { 00:24:54.923 "results": [ 00:24:54.923 { 00:24:54.923 "job": "ftl0", 00:24:54.923 "core_mask": "0x1", 00:24:54.923 "workload": "randwrite", 00:24:54.923 "status": "finished", 00:24:54.923 "queue_depth": 128, 00:24:54.923 "io_size": 4096, 00:24:54.923 "runtime": 4.014847, 00:24:54.923 "iops": 9820.797654306627, 00:24:54.923 "mibps": 38.36249083713526, 00:24:54.923 "io_failed": 0, 00:24:54.923 "io_timeout": 0, 00:24:54.923 "avg_latency_us": 13026.637050667508, 00:24:54.923 "min_latency_us": 269.39076923076925, 00:24:54.923 "max_latency_us": 251658.24 00:24:54.923 } 00:24:54.923 ], 00:24:54.923 "core_count": 1 00:24:54.923 } 00:24:54.923 13:44:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:24:54.923 [2024-11-20 13:44:54.145638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:54.923 Running I/O for 4 seconds... 
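Taken together, bdevperf.sh@30-@32 sweep a small matrix: shallow-queue 68 KiB random writes, deep-queue 4 KiB random writes, then a deep-queue verify pass that reads back and checks a range of length 0x1400000 blocks starting at LBA 0x0 (0x1400000 = 20971520, which reappears as verify_range.length in the JSON below). A compact equivalent of the three invocations, assuming the same bdevperf.py path:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  for args in '-q 1 -w randwrite -t 4 -o 69632' \
              '-q 128 -w randwrite -t 4 -o 4096' \
              '-q 128 -w verify -t 4 -o 4096'; do
    $BDEVPERF perform_tests $args   # $args left unquoted on purpose so the flags word-split
  done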
00:24:56.790 8759.00 IOPS, 34.21 MiB/s [2024-11-20T13:44:57.590Z] 8815.50 IOPS, 34.44 MiB/s [2024-11-20T13:44:58.181Z] 8845.33 IOPS, 34.55 MiB/s [2024-11-20T13:44:58.479Z] 8722.75 IOPS, 34.07 MiB/s 00:24:59.052 Latency(us) 00:24:59.052 [2024-11-20T13:44:58.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.052 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:59.052 Verification LBA range: start 0x0 length 0x1400000 00:24:59.052 ftl0 : 4.01 8725.81 34.09 0.00 0.00 14616.29 275.69 32667.18 00:24:59.052 [2024-11-20T13:44:58.479Z] =================================================================================================================== 00:24:59.052 [2024-11-20T13:44:58.479Z] Total : 8725.81 34.09 0.00 0.00 14616.29 0.00 32667.18 00:24:59.052 [2024-11-20 13:44:58.175559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:24:59.052 "results": [ 00:24:59.052 { 00:24:59.052 "job": "ftl0", 00:24:59.052 "core_mask": "0x1", 00:24:59.052 "workload": "verify", 00:24:59.052 "status": "finished", 00:24:59.052 "verify_range": { 00:24:59.052 "start": 0, 00:24:59.052 "length": 20971520 00:24:59.052 }, 00:24:59.052 "queue_depth": 128, 00:24:59.052 "io_size": 4096, 00:24:59.052 "runtime": 4.013037, 00:24:59.052 "iops": 8725.810402445828, 00:24:59.052 "mibps": 34.085196884554016, 00:24:59.052 "io_failed": 0, 00:24:59.052 "io_timeout": 0, 00:24:59.052 "avg_latency_us": 14616.285721089318, 00:24:59.052 "min_latency_us": 275.6923076923077, 00:24:59.052 "max_latency_us": 32667.175384615384 00:24:59.052 } 00:24:59.052 ], 00:24:59.052 "core_count": 1 00:24:59.052 } 00:24:59.053 13:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:24:59.053 [2024-11-20 13:44:58.385833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.053 [2024-11-20 13:44:58.386048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:59.053 [2024-11-20 13:44:58.386116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:59.053 [2024-11-20 13:44:58.386143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.053 [2024-11-20 13:44:58.386185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:59.053 [2024-11-20 13:44:58.388755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.053 [2024-11-20 13:44:58.388877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:59.053 [2024-11-20 13:44:58.389058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:24:59.053 [2024-11-20 13:44:58.389084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.053 [2024-11-20 13:44:58.391029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.053 [2024-11-20 13:44:58.391132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:59.053 [2024-11-20 13:44:58.391193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:24:59.053 [2024-11-20 13:44:58.391222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.536362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.536509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:24:59.313 [2024-11-20 13:44:58.536534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 145.103 ms 00:24:59.313 [2024-11-20 13:44:58.536543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.542714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.542833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:59.313 [2024-11-20 13:44:58.542852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms 00:24:59.313 [2024-11-20 13:44:58.542861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.565962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.566012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:59.313 [2024-11-20 13:44:58.566025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:24:59.313 [2024-11-20 13:44:58.566033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.580564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.580600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:59.313 [2024-11-20 13:44:58.580614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.496 ms 00:24:59.313 [2024-11-20 13:44:58.580621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.580756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.580766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:59.313 [2024-11-20 13:44:58.580779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:59.313 [2024-11-20 13:44:58.580787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.603807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.603842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:59.313 [2024-11-20 13:44:58.603855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.005 ms 00:24:59.313 [2024-11-20 13:44:58.603863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.626275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.626309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:59.313 [2024-11-20 13:44:58.626323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.377 ms 00:24:59.313 [2024-11-20 13:44:58.626331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.648501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 13:44:58.648535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:59.313 [2024-11-20 13:44:58.648549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.134 ms 00:24:59.313 [2024-11-20 13:44:58.648556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.670577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.313 [2024-11-20 
13:44:58.670613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:59.313 [2024-11-20 13:44:58.670630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.949 ms 00:24:59.313 [2024-11-20 13:44:58.670637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.313 [2024-11-20 13:44:58.670672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:59.313 [2024-11-20 13:44:58.670687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.670994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.671003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.671010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.671019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.671027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:59.313 [2024-11-20 13:44:58.671036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671328] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671531] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:59.314 [2024-11-20 13:44:58.671574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:59.314 [2024-11-20 13:44:58.671583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3d5d4e25-b4e6-4c39-936d-af510fa9acb3 00:24:59.314 [2024-11-20 13:44:58.671590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:59.314 [2024-11-20 13:44:58.671601] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:59.314 [2024-11-20 13:44:58.671608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:59.314 [2024-11-20 13:44:58.671617] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:59.314 [2024-11-20 13:44:58.671623] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:59.314 [2024-11-20 13:44:58.671632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:59.314 [2024-11-20 13:44:58.671639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:59.314 [2024-11-20 13:44:58.671648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:59.314 [2024-11-20 13:44:58.671655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:59.314 [2024-11-20 13:44:58.671663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.314 [2024-11-20 13:44:58.671670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:59.314 [2024-11-20 13:44:58.671680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:24:59.314 [2024-11-20 13:44:58.671687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.314 [2024-11-20 13:44:58.684099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.314 [2024-11-20 13:44:58.684223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:59.314 [2024-11-20 13:44:58.684241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.382 ms 00:24:59.314 [2024-11-20 13:44:58.684249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.314 [2024-11-20 13:44:58.684587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.314 [2024-11-20 13:44:58.684597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:59.314 [2024-11-20 13:44:58.684606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:59.314 [2024-11-20 13:44:58.684613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.314 [2024-11-20 13:44:58.719156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.314 [2024-11-20 13:44:58.719274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:59.314 [2024-11-20 13:44:58.719296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.314 [2024-11-20 13:44:58.719303] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:59.314 [2024-11-20 13:44:58.719362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.314 [2024-11-20 13:44:58.719370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:59.314 [2024-11-20 13:44:58.719379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.314 [2024-11-20 13:44:58.719386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.314 [2024-11-20 13:44:58.719473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.314 [2024-11-20 13:44:58.719484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:59.314 [2024-11-20 13:44:58.719493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.314 [2024-11-20 13:44:58.719500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.314 [2024-11-20 13:44:58.719517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.314 [2024-11-20 13:44:58.719524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:59.315 [2024-11-20 13:44:58.719533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.315 [2024-11-20 13:44:58.719540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.794669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.794838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:59.573 [2024-11-20 13:44:58.794860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.794868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.856505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.856552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:59.573 [2024-11-20 13:44:58.856566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.856574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.856647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.856659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:59.573 [2024-11-20 13:44:58.856669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.856676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.856735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.856744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.573 [2024-11-20 13:44:58.856754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.856761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.856846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.856872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.573 [2024-11-20 13:44:58.856887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:59.573 [2024-11-20 13:44:58.856895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.856928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.856936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:59.573 [2024-11-20 13:44:58.856945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.856952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.857009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.857019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.573 [2024-11-20 13:44:58.857046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.857054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.857094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.573 [2024-11-20 13:44:58.857110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.573 [2024-11-20 13:44:58.857118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.573 [2024-11-20 13:44:58.857126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.573 [2024-11-20 13:44:58.857241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 471.372 ms, result 0 00:24:59.573 true 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76211 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76211 ']' 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76211 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76211 00:24:59.573 killing process with pid 76211 00:24:59.573 Received shutdown signal, test time was about 4.000000 seconds 00:24:59.573 00:24:59.573 Latency(us) 00:24:59.573 [2024-11-20T13:44:59.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.573 [2024-11-20T13:44:59.000Z] =================================================================================================================== 00:24:59.573 [2024-11-20T13:44:59.000Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76211' 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76211 00:24:59.573 13:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76211 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:00.507 Remove shared memory files 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:00.507 13:44:59 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:00.507 ************************************ 00:25:00.507 END TEST ftl_bdevperf 00:25:00.507 ************************************ 00:25:00.507 00:25:00.507 real 0m20.811s 00:25:00.507 user 0m23.652s 00:25:00.507 sys 0m0.827s 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.507 13:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:00.507 13:44:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:00.507 13:44:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:00.507 13:44:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.507 13:44:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:00.507 ************************************ 00:25:00.507 START TEST ftl_trim 00:25:00.507 ************************************ 00:25:00.507 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:00.507 * Looking for test storage... 00:25:00.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.507 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:00.507 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:25:00.507 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:00.507 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:00.507 13:44:59 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.508 13:44:59 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.508 --rc genhtml_branch_coverage=1 00:25:00.508 --rc genhtml_function_coverage=1 00:25:00.508 --rc genhtml_legend=1 00:25:00.508 --rc geninfo_all_blocks=1 00:25:00.508 --rc geninfo_unexecuted_blocks=1 00:25:00.508 00:25:00.508 ' 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.508 --rc genhtml_branch_coverage=1 00:25:00.508 --rc genhtml_function_coverage=1 00:25:00.508 --rc genhtml_legend=1 00:25:00.508 --rc geninfo_all_blocks=1 00:25:00.508 --rc geninfo_unexecuted_blocks=1 00:25:00.508 00:25:00.508 ' 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.508 --rc genhtml_branch_coverage=1 00:25:00.508 --rc genhtml_function_coverage=1 00:25:00.508 --rc genhtml_legend=1 00:25:00.508 --rc geninfo_all_blocks=1 00:25:00.508 --rc geninfo_unexecuted_blocks=1 00:25:00.508 00:25:00.508 ' 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.508 --rc genhtml_branch_coverage=1 00:25:00.508 --rc genhtml_function_coverage=1 00:25:00.508 --rc genhtml_legend=1 00:25:00.508 --rc geninfo_all_blocks=1 00:25:00.508 --rc geninfo_unexecuted_blocks=1 00:25:00.508 00:25:00.508 ' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
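The scripts/common.sh trace above is a component-wise version compare: lt 1.15 2 splits both version strings on '.', '-' and ':', then walks the components numerically, deciding on the first unequal pair, so lcov 1.15 sorts below 2 and the lcov 1.x LCOV_OPTS block gets exported. A simplified sketch of that logic (ver_lt is a hypothetical name, not the actual common.sh function):

  ver_lt() {
      local IFS=.-:          # split on the same separators as the cmp_versions trace above
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      local v
      for (( v = 0; v < n; v++ )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first smaller component decides
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # first larger component decides
      done
      return 1               # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'using the lcov 1.x option set'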
00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:00.508 13:44:59 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76547 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:00.508 13:44:59 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76547 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76547 ']' 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.508 13:44:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:00.768 [2024-11-20 13:44:59.966556] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:25:00.768 [2024-11-20 13:44:59.966815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76547 ] 00:25:00.768 [2024-11-20 13:45:00.128368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:01.026 [2024-11-20 13:45:00.258111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.026 [2024-11-20 13:45:00.258406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.026 [2024-11-20 13:45:00.258387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.593 13:45:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.593 13:45:00 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:01.593 13:45:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:01.593 13:45:00 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:01.593 13:45:00 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:01.593 13:45:00 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:01.593 13:45:00 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:01.593 13:45:00 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:01.851 13:45:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:01.851 13:45:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:01.851 13:45:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:01.851 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:01.851 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:01.851 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:01.851 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:01.851 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:02.110 { 00:25:02.110 "name": "nvme0n1", 00:25:02.110 "aliases": [ 
00:25:02.110 "1bd35f00-ba50-4751-b778-de2c0d75a4dc" 00:25:02.110 ], 00:25:02.110 "product_name": "NVMe disk", 00:25:02.110 "block_size": 4096, 00:25:02.110 "num_blocks": 1310720, 00:25:02.110 "uuid": "1bd35f00-ba50-4751-b778-de2c0d75a4dc", 00:25:02.110 "numa_id": -1, 00:25:02.110 "assigned_rate_limits": { 00:25:02.110 "rw_ios_per_sec": 0, 00:25:02.110 "rw_mbytes_per_sec": 0, 00:25:02.110 "r_mbytes_per_sec": 0, 00:25:02.110 "w_mbytes_per_sec": 0 00:25:02.110 }, 00:25:02.110 "claimed": true, 00:25:02.110 "claim_type": "read_many_write_one", 00:25:02.110 "zoned": false, 00:25:02.110 "supported_io_types": { 00:25:02.110 "read": true, 00:25:02.110 "write": true, 00:25:02.110 "unmap": true, 00:25:02.110 "flush": true, 00:25:02.110 "reset": true, 00:25:02.110 "nvme_admin": true, 00:25:02.110 "nvme_io": true, 00:25:02.110 "nvme_io_md": false, 00:25:02.110 "write_zeroes": true, 00:25:02.110 "zcopy": false, 00:25:02.110 "get_zone_info": false, 00:25:02.110 "zone_management": false, 00:25:02.110 "zone_append": false, 00:25:02.110 "compare": true, 00:25:02.110 "compare_and_write": false, 00:25:02.110 "abort": true, 00:25:02.110 "seek_hole": false, 00:25:02.110 "seek_data": false, 00:25:02.110 "copy": true, 00:25:02.110 "nvme_iov_md": false 00:25:02.110 }, 00:25:02.110 "driver_specific": { 00:25:02.110 "nvme": [ 00:25:02.110 { 00:25:02.110 "pci_address": "0000:00:11.0", 00:25:02.110 "trid": { 00:25:02.110 "trtype": "PCIe", 00:25:02.110 "traddr": "0000:00:11.0" 00:25:02.110 }, 00:25:02.110 "ctrlr_data": { 00:25:02.110 "cntlid": 0, 00:25:02.110 "vendor_id": "0x1b36", 00:25:02.110 "model_number": "QEMU NVMe Ctrl", 00:25:02.110 "serial_number": "12341", 00:25:02.110 "firmware_revision": "8.0.0", 00:25:02.110 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:02.110 "oacs": { 00:25:02.110 "security": 0, 00:25:02.110 "format": 1, 00:25:02.110 "firmware": 0, 00:25:02.110 "ns_manage": 1 00:25:02.110 }, 00:25:02.110 "multi_ctrlr": false, 00:25:02.110 "ana_reporting": false 00:25:02.110 }, 00:25:02.110 "vs": { 00:25:02.110 "nvme_version": "1.4" 00:25:02.110 }, 00:25:02.110 "ns_data": { 00:25:02.110 "id": 1, 00:25:02.110 "can_share": false 00:25:02.110 } 00:25:02.110 } 00:25:02.110 ], 00:25:02.110 "mp_policy": "active_passive" 00:25:02.110 } 00:25:02.110 } 00:25:02.110 ]' 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:02.110 13:45:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:25:02.110 13:45:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:02.110 13:45:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:02.110 13:45:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:02.110 13:45:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:02.110 13:45:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:02.368 13:45:01 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c58c273b-d6f4-4559-9b89-26eee21e7fa3 00:25:02.368 13:45:01 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:02.368 13:45:01 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c58c273b-d6f4-4559-9b89-26eee21e7fa3 00:25:02.626 13:45:01 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:02.626 13:45:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=9fe4c930-151c-4e28-9cad-872c0321c2e5 00:25:02.626 13:45:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9fe4c930-151c-4e28-9cad-872c0321c2e5 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:02.884 13:45:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:02.884 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:02.884 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:02.884 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:02.884 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:02.884 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:03.142 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:03.142 { 00:25:03.142 "name": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:03.142 "aliases": [ 00:25:03.142 "lvs/nvme0n1p0" 00:25:03.142 ], 00:25:03.142 "product_name": "Logical Volume", 00:25:03.142 "block_size": 4096, 00:25:03.142 "num_blocks": 26476544, 00:25:03.142 "uuid": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:03.142 "assigned_rate_limits": { 00:25:03.142 "rw_ios_per_sec": 0, 00:25:03.142 "rw_mbytes_per_sec": 0, 00:25:03.142 "r_mbytes_per_sec": 0, 00:25:03.142 "w_mbytes_per_sec": 0 00:25:03.142 }, 00:25:03.142 "claimed": false, 00:25:03.142 "zoned": false, 00:25:03.142 "supported_io_types": { 00:25:03.142 "read": true, 00:25:03.142 "write": true, 00:25:03.142 "unmap": true, 00:25:03.142 "flush": false, 00:25:03.142 "reset": true, 00:25:03.142 "nvme_admin": false, 00:25:03.142 "nvme_io": false, 00:25:03.142 "nvme_io_md": false, 00:25:03.142 "write_zeroes": true, 00:25:03.142 "zcopy": false, 00:25:03.142 "get_zone_info": false, 00:25:03.142 "zone_management": false, 00:25:03.142 "zone_append": false, 00:25:03.142 "compare": false, 00:25:03.142 "compare_and_write": false, 00:25:03.142 "abort": false, 00:25:03.142 "seek_hole": true, 00:25:03.142 "seek_data": true, 00:25:03.142 "copy": false, 00:25:03.142 "nvme_iov_md": false 00:25:03.142 }, 00:25:03.142 "driver_specific": { 00:25:03.142 "lvol": { 00:25:03.143 "lvol_store_uuid": "9fe4c930-151c-4e28-9cad-872c0321c2e5", 00:25:03.143 "base_bdev": "nvme0n1", 00:25:03.143 "thin_provision": true, 00:25:03.143 "num_allocated_clusters": 0, 00:25:03.143 "snapshot": false, 00:25:03.143 "clone": false, 00:25:03.143 "esnap_clone": false 00:25:03.143 } 00:25:03.143 } 00:25:03.143 } 00:25:03.143 ]' 00:25:03.143 13:45:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:03.143 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:03.143 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:03.143 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:03.143 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:03.143 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:03.143 13:45:02 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:03.143 13:45:02 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:03.143 13:45:02 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:03.400 13:45:02 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:03.400 13:45:02 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:03.400 13:45:02 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:03.400 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:03.400 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:03.401 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:03.401 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:03.401 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:03.658 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:03.658 { 00:25:03.658 "name": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:03.658 "aliases": [ 00:25:03.658 "lvs/nvme0n1p0" 00:25:03.658 ], 00:25:03.658 "product_name": "Logical Volume", 00:25:03.658 "block_size": 4096, 00:25:03.658 "num_blocks": 26476544, 00:25:03.658 "uuid": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:03.658 "assigned_rate_limits": { 00:25:03.658 "rw_ios_per_sec": 0, 00:25:03.658 "rw_mbytes_per_sec": 0, 00:25:03.658 "r_mbytes_per_sec": 0, 00:25:03.658 "w_mbytes_per_sec": 0 00:25:03.658 }, 00:25:03.658 "claimed": false, 00:25:03.658 "zoned": false, 00:25:03.658 "supported_io_types": { 00:25:03.658 "read": true, 00:25:03.658 "write": true, 00:25:03.658 "unmap": true, 00:25:03.658 "flush": false, 00:25:03.658 "reset": true, 00:25:03.658 "nvme_admin": false, 00:25:03.658 "nvme_io": false, 00:25:03.658 "nvme_io_md": false, 00:25:03.658 "write_zeroes": true, 00:25:03.658 "zcopy": false, 00:25:03.658 "get_zone_info": false, 00:25:03.658 "zone_management": false, 00:25:03.658 "zone_append": false, 00:25:03.658 "compare": false, 00:25:03.658 "compare_and_write": false, 00:25:03.658 "abort": false, 00:25:03.658 "seek_hole": true, 00:25:03.658 "seek_data": true, 00:25:03.658 "copy": false, 00:25:03.658 "nvme_iov_md": false 00:25:03.658 }, 00:25:03.658 "driver_specific": { 00:25:03.658 "lvol": { 00:25:03.659 "lvol_store_uuid": "9fe4c930-151c-4e28-9cad-872c0321c2e5", 00:25:03.659 "base_bdev": "nvme0n1", 00:25:03.659 "thin_provision": true, 00:25:03.659 "num_allocated_clusters": 0, 00:25:03.659 "snapshot": false, 00:25:03.659 "clone": false, 00:25:03.659 "esnap_clone": false 00:25:03.659 } 00:25:03.659 } 00:25:03.659 } 00:25:03.659 ]' 00:25:03.659 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:03.659 13:45:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:03.659 13:45:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:03.659 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:03.659 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:03.659 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:03.659 13:45:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:03.659 13:45:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:03.917 13:45:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:03.917 13:45:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:03.917 13:45:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:03.917 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:03.917 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:03.917 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:03.917 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:03.917 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 638fb24a-f2c8-4562-9d78-bc30c0f77210 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:04.176 { 00:25:04.176 "name": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:04.176 "aliases": [ 00:25:04.176 "lvs/nvme0n1p0" 00:25:04.176 ], 00:25:04.176 "product_name": "Logical Volume", 00:25:04.176 "block_size": 4096, 00:25:04.176 "num_blocks": 26476544, 00:25:04.176 "uuid": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:04.176 "assigned_rate_limits": { 00:25:04.176 "rw_ios_per_sec": 0, 00:25:04.176 "rw_mbytes_per_sec": 0, 00:25:04.176 "r_mbytes_per_sec": 0, 00:25:04.176 "w_mbytes_per_sec": 0 00:25:04.176 }, 00:25:04.176 "claimed": false, 00:25:04.176 "zoned": false, 00:25:04.176 "supported_io_types": { 00:25:04.176 "read": true, 00:25:04.176 "write": true, 00:25:04.176 "unmap": true, 00:25:04.176 "flush": false, 00:25:04.176 "reset": true, 00:25:04.176 "nvme_admin": false, 00:25:04.176 "nvme_io": false, 00:25:04.176 "nvme_io_md": false, 00:25:04.176 "write_zeroes": true, 00:25:04.176 "zcopy": false, 00:25:04.176 "get_zone_info": false, 00:25:04.176 "zone_management": false, 00:25:04.176 "zone_append": false, 00:25:04.176 "compare": false, 00:25:04.176 "compare_and_write": false, 00:25:04.176 "abort": false, 00:25:04.176 "seek_hole": true, 00:25:04.176 "seek_data": true, 00:25:04.176 "copy": false, 00:25:04.176 "nvme_iov_md": false 00:25:04.176 }, 00:25:04.176 "driver_specific": { 00:25:04.176 "lvol": { 00:25:04.176 "lvol_store_uuid": "9fe4c930-151c-4e28-9cad-872c0321c2e5", 00:25:04.176 "base_bdev": "nvme0n1", 00:25:04.176 "thin_provision": true, 00:25:04.176 "num_allocated_clusters": 0, 00:25:04.176 "snapshot": false, 00:25:04.176 "clone": false, 00:25:04.176 "esnap_clone": false 00:25:04.176 } 00:25:04.176 } 00:25:04.176 } 00:25:04.176 ]' 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:04.176 13:45:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:04.176 13:45:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:04.176 13:45:03 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 638fb24a-f2c8-4562-9d78-bc30c0f77210 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:04.470 [2024-11-20 13:45:03.721376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.721427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:04.470 [2024-11-20 13:45:03.721444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.470 [2024-11-20 13:45:03.721452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.470 [2024-11-20 13:45:03.724190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.724227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.470 [2024-11-20 13:45:03.724239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.712 ms 00:25:04.470 [2024-11-20 13:45:03.724247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.470 [2024-11-20 13:45:03.724426] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:04.470 [2024-11-20 13:45:03.725165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:04.470 [2024-11-20 13:45:03.725197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.725205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.470 [2024-11-20 13:45:03.725216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:25:04.470 [2024-11-20 13:45:03.725223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.470 [2024-11-20 13:45:03.725320] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 11e97a26-71f7-4abf-a886-7273f21decce 00:25:04.470 [2024-11-20 13:45:03.726321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.726352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:04.470 [2024-11-20 13:45:03.726361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:04.470 [2024-11-20 13:45:03.726370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.470 [2024-11-20 13:45:03.731263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.731452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.470 [2024-11-20 13:45:03.731469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.815 ms 00:25:04.470 [2024-11-20 13:45:03.731480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.470 [2024-11-20 13:45:03.731606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.731618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.470 [2024-11-20 13:45:03.731626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.068 ms 00:25:04.470 [2024-11-20 13:45:03.731638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.470 [2024-11-20 13:45:03.731671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.470 [2024-11-20 13:45:03.731681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:04.470 [2024-11-20 13:45:03.731688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:04.471 [2024-11-20 13:45:03.731699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.471 [2024-11-20 13:45:03.731726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:04.471 [2024-11-20 13:45:03.735290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.471 [2024-11-20 13:45:03.735320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.471 [2024-11-20 13:45:03.735333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.567 ms 00:25:04.471 [2024-11-20 13:45:03.735341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.471 [2024-11-20 13:45:03.735383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.471 [2024-11-20 13:45:03.735392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:04.471 [2024-11-20 13:45:03.735401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:04.471 [2024-11-20 13:45:03.735421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.471 [2024-11-20 13:45:03.735453] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:04.471 [2024-11-20 13:45:03.735584] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:04.471 [2024-11-20 13:45:03.735599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:04.471 [2024-11-20 13:45:03.735609] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:04.471 [2024-11-20 13:45:03.735620] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:04.471 [2024-11-20 13:45:03.735629] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:04.471 [2024-11-20 13:45:03.735638] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:04.471 [2024-11-20 13:45:03.735645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:04.471 [2024-11-20 13:45:03.735653] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:04.471 [2024-11-20 13:45:03.735662] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:04.471 [2024-11-20 13:45:03.735671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.471 [2024-11-20 13:45:03.735679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:04.471 [2024-11-20 13:45:03.735687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:25:04.471 [2024-11-20 13:45:03.735694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.471 [2024-11-20 13:45:03.735797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.471 
[2024-11-20 13:45:03.735806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:04.471 [2024-11-20 13:45:03.735815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:04.471 [2024-11-20 13:45:03.735822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.471 [2024-11-20 13:45:03.735947] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:04.471 [2024-11-20 13:45:03.735956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:04.471 [2024-11-20 13:45:03.735965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.471 [2024-11-20 13:45:03.735989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.735999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:04.471 [2024-11-20 13:45:03.736006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:04.471 [2024-11-20 13:45:03.736029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.471 [2024-11-20 13:45:03.736045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:04.471 [2024-11-20 13:45:03.736052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:04.471 [2024-11-20 13:45:03.736060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.471 [2024-11-20 13:45:03.736067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:04.471 [2024-11-20 13:45:03.736075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:04.471 [2024-11-20 13:45:03.736082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:04.471 [2024-11-20 13:45:03.736099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:04.471 [2024-11-20 13:45:03.736130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:04.471 [2024-11-20 13:45:03.736152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:04.471 [2024-11-20 13:45:03.736177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:25:04.471 [2024-11-20 13:45:03.736198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:04.471 [2024-11-20 13:45:03.736223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.471 [2024-11-20 13:45:03.736237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:04.471 [2024-11-20 13:45:03.736244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:04.471 [2024-11-20 13:45:03.736252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.471 [2024-11-20 13:45:03.736258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:04.471 [2024-11-20 13:45:03.736266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:04.471 [2024-11-20 13:45:03.736272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:04.471 [2024-11-20 13:45:03.736286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:04.471 [2024-11-20 13:45:03.736294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:04.471 [2024-11-20 13:45:03.736309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:04.471 [2024-11-20 13:45:03.736316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.471 [2024-11-20 13:45:03.736331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:04.471 [2024-11-20 13:45:03.736342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:04.471 [2024-11-20 13:45:03.736349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:04.471 [2024-11-20 13:45:03.736357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:04.471 [2024-11-20 13:45:03.736363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:04.471 [2024-11-20 13:45:03.736371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:04.471 [2024-11-20 13:45:03.736381] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:04.471 [2024-11-20 13:45:03.736391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.471 [2024-11-20 13:45:03.736401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:04.471 [2024-11-20 13:45:03.736409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:04.471 [2024-11-20 13:45:03.736417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:25:04.471 [2024-11-20 13:45:03.736426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:04.471 [2024-11-20 13:45:03.736433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:04.471 [2024-11-20 13:45:03.736441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:04.472 [2024-11-20 13:45:03.736448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:04.472 [2024-11-20 13:45:03.736456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:04.472 [2024-11-20 13:45:03.736464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:04.472 [2024-11-20 13:45:03.736474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:04.472 [2024-11-20 13:45:03.736481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:04.472 [2024-11-20 13:45:03.736489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:04.472 [2024-11-20 13:45:03.736496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:04.472 [2024-11-20 13:45:03.736505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:04.472 [2024-11-20 13:45:03.736512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:04.472 [2024-11-20 13:45:03.736525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.472 [2024-11-20 13:45:03.736532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:04.472 [2024-11-20 13:45:03.736541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:04.472 [2024-11-20 13:45:03.736548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:04.472 [2024-11-20 13:45:03.736556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:04.472 [2024-11-20 13:45:03.736564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.472 [2024-11-20 13:45:03.736572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:04.472 [2024-11-20 13:45:03.736579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:25:04.472 [2024-11-20 13:45:03.736588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.472 [2024-11-20 13:45:03.736658] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:25:04.472 [2024-11-20 13:45:03.736676] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:06.997 [2024-11-20 13:45:06.168638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.168693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:06.997 [2024-11-20 13:45:06.168707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2431.970 ms 00:25:06.997 [2024-11-20 13:45:06.168718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.193847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.193896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.997 [2024-11-20 13:45:06.193908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.879 ms 00:25:06.997 [2024-11-20 13:45:06.193918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.194082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.194095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:06.997 [2024-11-20 13:45:06.194104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:06.997 [2024-11-20 13:45:06.194115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.242324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.242369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.997 [2024-11-20 13:45:06.242382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.164 ms 00:25:06.997 [2024-11-20 13:45:06.242393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.242477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.242491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.997 [2024-11-20 13:45:06.242500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:06.997 [2024-11-20 13:45:06.242509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.242815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.242840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.997 [2024-11-20 13:45:06.242849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:06.997 [2024-11-20 13:45:06.242858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.242993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.243146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.997 [2024-11-20 13:45:06.243157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:06.997 [2024-11-20 13:45:06.243168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.257280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.257392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:25:06.997 [2024-11-20 13:45:06.257453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.069 ms 00:25:06.997 [2024-11-20 13:45:06.257479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.268756] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:06.997 [2024-11-20 13:45:06.282620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.282742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:06.997 [2024-11-20 13:45:06.282803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.969 ms 00:25:06.997 [2024-11-20 13:45:06.282826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.347843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.348024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:06.997 [2024-11-20 13:45:06.348096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.930 ms 00:25:06.997 [2024-11-20 13:45:06.348120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.348329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.997 [2024-11-20 13:45:06.348364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:06.997 [2024-11-20 13:45:06.348379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:25:06.997 [2024-11-20 13:45:06.348387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.997 [2024-11-20 13:45:06.371582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.998 [2024-11-20 13:45:06.371693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:06.998 [2024-11-20 13:45:06.371747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.160 ms 00:25:06.998 [2024-11-20 13:45:06.371770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.998 [2024-11-20 13:45:06.394267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.998 [2024-11-20 13:45:06.394375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:06.998 [2024-11-20 13:45:06.394457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.437 ms 00:25:06.998 [2024-11-20 13:45:06.394479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.998 [2024-11-20 13:45:06.395119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.998 [2024-11-20 13:45:06.395203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:06.998 [2024-11-20 13:45:06.395257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:25:06.998 [2024-11-20 13:45:06.395279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.256 [2024-11-20 13:45:06.465654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.256 [2024-11-20 13:45:06.465812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:07.256 [2024-11-20 13:45:06.465870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.313 ms 00:25:07.256 [2024-11-20 13:45:06.465893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
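For orientation while the FTL startup trace continues: the ftl0 device being brought up here sits on a bdev stack assembled by the RPC calls recorded earlier in this log, condensed below as a sketch (the UUIDs and the 103424 MiB base / 5171 MiB cache sizes are specific to this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9fe4c930-151c-4e28-9cad-872c0321c2e5    # thin-provisioned base volume
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                                             # yields nvc0n1p0, the 5171 MiB write-buffer cache
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 638fb24a-f2c8-4562-9d78-bc30c0f77210 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10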
00:25:07.256 [2024-11-20 13:45:06.489826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.256 [2024-11-20 13:45:06.489946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:07.256 [2024-11-20 13:45:06.490018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.823 ms 00:25:07.256 [2024-11-20 13:45:06.490042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.256 [2024-11-20 13:45:06.513786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.256 [2024-11-20 13:45:06.513904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:07.256 [2024-11-20 13:45:06.513988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.645 ms 00:25:07.256 [2024-11-20 13:45:06.514012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.256 [2024-11-20 13:45:06.536689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.256 [2024-11-20 13:45:06.536810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:07.256 [2024-11-20 13:45:06.536873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.593 ms 00:25:07.256 [2024-11-20 13:45:06.536909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.256 [2024-11-20 13:45:06.536991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.256 [2024-11-20 13:45:06.537021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:07.256 [2024-11-20 13:45:06.537046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:07.256 [2024-11-20 13:45:06.537094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.256 [2024-11-20 13:45:06.537190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.256 [2024-11-20 13:45:06.537220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:07.256 [2024-11-20 13:45:06.537242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:07.256 [2024-11-20 13:45:06.537261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.256 [2024-11-20 13:45:06.538232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:07.256 [2024-11-20 13:45:06.541322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2816.583 ms, result 0 00:25:07.256 [2024-11-20 13:45:06.541993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:07.256 { 00:25:07.256 "name": "ftl0", 00:25:07.256 "uuid": "11e97a26-71f7-4abf-a886-7273f21decce" 00:25:07.256 } 00:25:07.256 13:45:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:07.256 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:07.256 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:07.256 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:07.256 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:07.256 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:07.256 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:07.516 13:45:06 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:07.774 [ 00:25:07.774 { 00:25:07.774 "name": "ftl0", 00:25:07.774 "aliases": [ 00:25:07.774 "11e97a26-71f7-4abf-a886-7273f21decce" 00:25:07.774 ], 00:25:07.774 "product_name": "FTL disk", 00:25:07.774 "block_size": 4096, 00:25:07.774 "num_blocks": 23592960, 00:25:07.774 "uuid": "11e97a26-71f7-4abf-a886-7273f21decce", 00:25:07.774 "assigned_rate_limits": { 00:25:07.774 "rw_ios_per_sec": 0, 00:25:07.775 "rw_mbytes_per_sec": 0, 00:25:07.775 "r_mbytes_per_sec": 0, 00:25:07.775 "w_mbytes_per_sec": 0 00:25:07.775 }, 00:25:07.775 "claimed": false, 00:25:07.775 "zoned": false, 00:25:07.775 "supported_io_types": { 00:25:07.775 "read": true, 00:25:07.775 "write": true, 00:25:07.775 "unmap": true, 00:25:07.775 "flush": true, 00:25:07.775 "reset": false, 00:25:07.775 "nvme_admin": false, 00:25:07.775 "nvme_io": false, 00:25:07.775 "nvme_io_md": false, 00:25:07.775 "write_zeroes": true, 00:25:07.775 "zcopy": false, 00:25:07.775 "get_zone_info": false, 00:25:07.775 "zone_management": false, 00:25:07.775 "zone_append": false, 00:25:07.775 "compare": false, 00:25:07.775 "compare_and_write": false, 00:25:07.775 "abort": false, 00:25:07.775 "seek_hole": false, 00:25:07.775 "seek_data": false, 00:25:07.775 "copy": false, 00:25:07.775 "nvme_iov_md": false 00:25:07.775 }, 00:25:07.775 "driver_specific": { 00:25:07.775 "ftl": { 00:25:07.775 "base_bdev": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 00:25:07.775 "cache": "nvc0n1p0" 00:25:07.775 } 00:25:07.775 } 00:25:07.775 } 00:25:07.775 ] 00:25:07.775 13:45:06 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:07.775 13:45:06 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:07.775 13:45:06 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:07.775 13:45:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:07.775 13:45:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:08.034 13:45:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:08.034 { 00:25:08.034 "name": "ftl0", 00:25:08.034 "aliases": [ 00:25:08.034 "11e97a26-71f7-4abf-a886-7273f21decce" 00:25:08.034 ], 00:25:08.034 "product_name": "FTL disk", 00:25:08.034 "block_size": 4096, 00:25:08.034 "num_blocks": 23592960, 00:25:08.034 "uuid": "11e97a26-71f7-4abf-a886-7273f21decce", 00:25:08.034 "assigned_rate_limits": { 00:25:08.034 "rw_ios_per_sec": 0, 00:25:08.034 "rw_mbytes_per_sec": 0, 00:25:08.034 "r_mbytes_per_sec": 0, 00:25:08.034 "w_mbytes_per_sec": 0 00:25:08.034 }, 00:25:08.034 "claimed": false, 00:25:08.034 "zoned": false, 00:25:08.034 "supported_io_types": { 00:25:08.034 "read": true, 00:25:08.034 "write": true, 00:25:08.034 "unmap": true, 00:25:08.034 "flush": true, 00:25:08.034 "reset": false, 00:25:08.034 "nvme_admin": false, 00:25:08.034 "nvme_io": false, 00:25:08.034 "nvme_io_md": false, 00:25:08.034 "write_zeroes": true, 00:25:08.034 "zcopy": false, 00:25:08.034 "get_zone_info": false, 00:25:08.034 "zone_management": false, 00:25:08.034 "zone_append": false, 00:25:08.034 "compare": false, 00:25:08.034 "compare_and_write": false, 00:25:08.034 "abort": false, 00:25:08.034 "seek_hole": false, 00:25:08.034 "seek_data": false, 00:25:08.034 "copy": false, 00:25:08.034 "nvme_iov_md": false 00:25:08.034 }, 00:25:08.034 "driver_specific": { 00:25:08.034 "ftl": { 00:25:08.034 "base_bdev": "638fb24a-f2c8-4562-9d78-bc30c0f77210", 
00:25:08.034 "cache": "nvc0n1p0" 00:25:08.034 } 00:25:08.034 } 00:25:08.034 } 00:25:08.034 ]' 00:25:08.034 13:45:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:08.034 13:45:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:08.034 13:45:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:08.291 [2024-11-20 13:45:07.645402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.291 [2024-11-20 13:45:07.645449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:08.291 [2024-11-20 13:45:07.645465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:08.291 [2024-11-20 13:45:07.645477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.291 [2024-11-20 13:45:07.645513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:08.291 [2024-11-20 13:45:07.648056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.291 [2024-11-20 13:45:07.648194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:08.291 [2024-11-20 13:45:07.648218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:25:08.291 [2024-11-20 13:45:07.648227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.291 [2024-11-20 13:45:07.648696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.291 [2024-11-20 13:45:07.648712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:08.291 [2024-11-20 13:45:07.648723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:25:08.291 [2024-11-20 13:45:07.648730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.292 [2024-11-20 13:45:07.652411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.292 [2024-11-20 13:45:07.652435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:08.292 [2024-11-20 13:45:07.652446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.652 ms 00:25:08.292 [2024-11-20 13:45:07.652455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.292 [2024-11-20 13:45:07.659395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.292 [2024-11-20 13:45:07.659509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:08.292 [2024-11-20 13:45:07.659527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:25:08.292 [2024-11-20 13:45:07.659535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.292 [2024-11-20 13:45:07.682776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.292 [2024-11-20 13:45:07.682914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:08.292 [2024-11-20 13:45:07.682937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.155 ms 00:25:08.292 [2024-11-20 13:45:07.682945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.292 [2024-11-20 13:45:07.697700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.292 [2024-11-20 13:45:07.697857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:08.292 [2024-11-20 13:45:07.697878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.674 ms 00:25:08.292 [2024-11-20 13:45:07.697889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.292 [2024-11-20 13:45:07.698128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.292 [2024-11-20 13:45:07.698140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:08.292 [2024-11-20 13:45:07.698151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:25:08.292 [2024-11-20 13:45:07.698158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-11-20 13:45:07.721079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.550 [2024-11-20 13:45:07.721120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:08.550 [2024-11-20 13:45:07.721133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.893 ms 00:25:08.550 [2024-11-20 13:45:07.721140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-11-20 13:45:07.743431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.550 [2024-11-20 13:45:07.743462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:08.550 [2024-11-20 13:45:07.743477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.230 ms 00:25:08.550 [2024-11-20 13:45:07.743484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-11-20 13:45:07.765492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.550 [2024-11-20 13:45:07.765522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:08.550 [2024-11-20 13:45:07.765534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.947 ms 00:25:08.550 [2024-11-20 13:45:07.765541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-11-20 13:45:07.787383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.550 [2024-11-20 13:45:07.787497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:08.550 [2024-11-20 13:45:07.787515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.744 ms 00:25:08.550 [2024-11-20 13:45:07.787522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-11-20 13:45:07.787573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:08.550 [2024-11-20 13:45:07.787589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787651] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:08.550 [2024-11-20 13:45:07.787862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 
[2024-11-20 13:45:07.787870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.787993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:25:08.551 [2024-11-20 13:45:07.788091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:08.551 [2024-11-20 13:45:07.788446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:08.551 [2024-11-20 13:45:07.788457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:08.551 [2024-11-20 13:45:07.788464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:08.551 [2024-11-20 13:45:07.788473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:08.551 [2024-11-20 13:45:07.788479] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:08.551 [2024-11-20 13:45:07.788490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:08.551 [2024-11-20 13:45:07.788496] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:08.551 [2024-11-20 13:45:07.788505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:25:08.551 [2024-11-20 13:45:07.788511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:08.551 [2024-11-20 13:45:07.788519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:08.551 [2024-11-20 13:45:07.788525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:08.551 [2024-11-20 13:45:07.788533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.551 [2024-11-20 13:45:07.788540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:08.551 [2024-11-20 13:45:07.788549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:25:08.551 [2024-11-20 13:45:07.788556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.551 [2024-11-20 13:45:07.800782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.551 [2024-11-20 13:45:07.800815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:08.551 [2024-11-20 13:45:07.800829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.192 ms 00:25:08.551 [2024-11-20 13:45:07.800837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.551 [2024-11-20 13:45:07.801250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.551 [2024-11-20 13:45:07.801271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:08.551 [2024-11-20 13:45:07.801281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:25:08.551 [2024-11-20 13:45:07.801288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.551 [2024-11-20 13:45:07.844772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.551 [2024-11-20 13:45:07.844820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.551 [2024-11-20 13:45:07.844834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.551 [2024-11-20 13:45:07.844843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.551 [2024-11-20 13:45:07.844961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.551 [2024-11-20 13:45:07.844986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.551 [2024-11-20 13:45:07.844997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.551 [2024-11-20 13:45:07.845004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.551 [2024-11-20 13:45:07.845071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.551 [2024-11-20 13:45:07.845080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.552 [2024-11-20 13:45:07.845093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.552 [2024-11-20 13:45:07.845101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.552 [2024-11-20 13:45:07.845127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.552 [2024-11-20 13:45:07.845135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.552 [2024-11-20 13:45:07.845144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.552 [2024-11-20 13:45:07.845151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.552 [2024-11-20 13:45:07.925551] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.552 [2024-11-20 13:45:07.925599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.552 [2024-11-20 13:45:07.925611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.552 [2024-11-20 13:45:07.925619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.988576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.988620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.869 [2024-11-20 13:45:07.988633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.869 [2024-11-20 13:45:07.988641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.988717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.988727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.869 [2024-11-20 13:45:07.988750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.869 [2024-11-20 13:45:07.988760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.988812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.988820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.869 [2024-11-20 13:45:07.988829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.869 [2024-11-20 13:45:07.988836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.988947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.988957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.869 [2024-11-20 13:45:07.988966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.869 [2024-11-20 13:45:07.988997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.989056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.989066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:08.869 [2024-11-20 13:45:07.989075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.869 [2024-11-20 13:45:07.989082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.989129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.989137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.869 [2024-11-20 13:45:07.989148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.869 [2024-11-20 13:45:07.989155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.869 [2024-11-20 13:45:07.989205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.869 [2024-11-20 13:45:07.989215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.869 [2024-11-20 13:45:07.989224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.870 [2024-11-20 13:45:07.989231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:25:08.870 [2024-11-20 13:45:07.989391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.974 ms, result 0
00:25:08.870 true
00:25:08.870 13:45:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76547
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76547 ']'
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76547
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76547
00:25:08.870 killing process with pid 76547
13:45:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76547'
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76547
00:25:08.870 13:45:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76547
00:25:15.440 13:45:14 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:25:16.005 65536+0 records in
00:25:16.005 65536+0 records out
00:25:16.005 268435456 bytes (268 MB, 256 MiB) copied, 1.06439 s, 252 MB/s
00:25:16.005 13:45:15 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 13:45:15.325055] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
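Note on the step traced above: trim.sh stages 65536 x 4 KiB blocks = 268,435,456 bytes (256 MiB) of random data, matching the dd summary, then replays it onto the FTL bdev with spdk_dd. A minimal standalone sketch of the same two commands follows; the redirection of dd into the random_pattern file is inferred from the --if argument spdk_dd receives (trim.sh itself is the authoritative source), while the spdk_dd flags are exactly those shown in the trace.

  #!/usr/bin/env bash
  # Sketch of the pattern-write step from ftl/trim.sh@66 and @69 above.
  SPDK=/home/vagrant/spdk_repo/spdk    # repo path used by this job

  # 65536 x 4 KiB = 268435456 bytes; the destination file is inferred,
  # since the redirection target is not visible in the trace.
  dd if=/dev/urandom bs=4K count=65536 > "$SPDK/test/ftl/random_pattern"

  # Replay the pattern onto the FTL bdev (--ob=ftl0) described by the JSON
  # bdev config generated earlier in this test.
  "$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/random_pattern" \
    --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json"

spdk_dd boots a fresh SPDK application instance, which is why a new "Starting SPDK" banner appears above and the DPDK EAL parameter dump follows below.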
00:25:16.006 [2024-11-20 13:45:15.325295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76729 ] 00:25:16.264 [2024-11-20 13:45:15.480638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.264 [2024-11-20 13:45:15.631390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.537 [2024-11-20 13:45:15.887568] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:16.537 [2024-11-20 13:45:15.887630] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:16.812 [2024-11-20 13:45:16.042022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.042077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:16.812 [2024-11-20 13:45:16.042092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.812 [2024-11-20 13:45:16.042100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.044730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.044765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.812 [2024-11-20 13:45:16.044775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:25:16.812 [2024-11-20 13:45:16.044782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.044867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:16.812 [2024-11-20 13:45:16.045553] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:16.812 [2024-11-20 13:45:16.045578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.045586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.812 [2024-11-20 13:45:16.045594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:25:16.812 [2024-11-20 13:45:16.045602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.046665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:16.812 [2024-11-20 13:45:16.059059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.059101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:16.812 [2024-11-20 13:45:16.059114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.394 ms 00:25:16.812 [2024-11-20 13:45:16.059123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.059217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.059228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:16.812 [2024-11-20 13:45:16.059236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:16.812 [2024-11-20 13:45:16.059244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.063961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:16.812 [2024-11-20 13:45:16.063999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.812 [2024-11-20 13:45:16.064009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:25:16.812 [2024-11-20 13:45:16.064017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.064100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.064110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.812 [2024-11-20 13:45:16.064118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:16.812 [2024-11-20 13:45:16.064125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.064150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.812 [2024-11-20 13:45:16.064160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:16.812 [2024-11-20 13:45:16.064168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:16.812 [2024-11-20 13:45:16.064175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.812 [2024-11-20 13:45:16.064195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:16.812 [2024-11-20 13:45:16.067446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.813 [2024-11-20 13:45:16.067596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.813 [2024-11-20 13:45:16.067612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:25:16.813 [2024-11-20 13:45:16.067620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.813 [2024-11-20 13:45:16.067656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.813 [2024-11-20 13:45:16.067664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:16.813 [2024-11-20 13:45:16.067673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:16.813 [2024-11-20 13:45:16.067681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.813 [2024-11-20 13:45:16.067698] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:16.813 [2024-11-20 13:45:16.067717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:16.813 [2024-11-20 13:45:16.067751] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:16.813 [2024-11-20 13:45:16.067766] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:16.813 [2024-11-20 13:45:16.067867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:16.813 [2024-11-20 13:45:16.067877] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:16.813 [2024-11-20 13:45:16.067888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:16.813 [2024-11-20 13:45:16.067897] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:16.813 [2024-11-20 13:45:16.067909] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:16.813 [2024-11-20 13:45:16.067917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:16.813 [2024-11-20 13:45:16.067924] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:16.813 [2024-11-20 13:45:16.067931] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:16.813 [2024-11-20 13:45:16.067938] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:16.813 [2024-11-20 13:45:16.067945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.813 [2024-11-20 13:45:16.067953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:16.813 [2024-11-20 13:45:16.067961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:25:16.813 [2024-11-20 13:45:16.067984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.813 [2024-11-20 13:45:16.068072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.813 [2024-11-20 13:45:16.068083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:16.813 [2024-11-20 13:45:16.068090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:16.813 [2024-11-20 13:45:16.068097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.813 [2024-11-20 13:45:16.068213] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:16.813 [2024-11-20 13:45:16.068223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:16.813 [2024-11-20 13:45:16.068231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:16.813 [2024-11-20 13:45:16.068253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:16.813 [2024-11-20 13:45:16.068274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.813 [2024-11-20 13:45:16.068288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:16.813 [2024-11-20 13:45:16.068294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:16.813 [2024-11-20 13:45:16.068301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.813 [2024-11-20 13:45:16.068313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:16.813 [2024-11-20 13:45:16.068320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:16.813 [2024-11-20 13:45:16.068326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:16.813 [2024-11-20 13:45:16.068339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068345] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:16.813 [2024-11-20 13:45:16.068360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:16.813 [2024-11-20 13:45:16.068379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:16.813 [2024-11-20 13:45:16.068398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:16.813 [2024-11-20 13:45:16.068417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:16.813 [2024-11-20 13:45:16.068436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.813 [2024-11-20 13:45:16.068449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:16.813 [2024-11-20 13:45:16.068455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:16.813 [2024-11-20 13:45:16.068461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.813 [2024-11-20 13:45:16.068468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:16.813 [2024-11-20 13:45:16.068475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:16.813 [2024-11-20 13:45:16.068481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:16.813 [2024-11-20 13:45:16.068493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:16.813 [2024-11-20 13:45:16.068500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068507] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:16.813 [2024-11-20 13:45:16.068514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:16.813 [2024-11-20 13:45:16.068521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.813 [2024-11-20 13:45:16.068537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:16.813 [2024-11-20 13:45:16.068543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:16.813 [2024-11-20 13:45:16.068550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:16.813 
[2024-11-20 13:45:16.068557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:16.813 [2024-11-20 13:45:16.068563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:16.813 [2024-11-20 13:45:16.068570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:16.813 [2024-11-20 13:45:16.068577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:16.813 [2024-11-20 13:45:16.068586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.813 [2024-11-20 13:45:16.068594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:16.813 [2024-11-20 13:45:16.068602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:16.814 [2024-11-20 13:45:16.068609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:16.814 [2024-11-20 13:45:16.068616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:16.814 [2024-11-20 13:45:16.068623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:16.814 [2024-11-20 13:45:16.068629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:16.814 [2024-11-20 13:45:16.068636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:16.814 [2024-11-20 13:45:16.068643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:16.814 [2024-11-20 13:45:16.068650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:16.814 [2024-11-20 13:45:16.068657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:16.814 [2024-11-20 13:45:16.068664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:16.814 [2024-11-20 13:45:16.068671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:16.814 [2024-11-20 13:45:16.068678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:16.814 [2024-11-20 13:45:16.068685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:16.814 [2024-11-20 13:45:16.068692] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:16.814 [2024-11-20 13:45:16.068700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.814 [2024-11-20 13:45:16.068707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.814 [2024-11-20 13:45:16.068715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:16.814 [2024-11-20 13:45:16.068721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:16.814 [2024-11-20 13:45:16.068728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:16.814 [2024-11-20 13:45:16.068736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.068742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:16.814 [2024-11-20 13:45:16.068752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:25:16.814 [2024-11-20 13:45:16.068759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.094125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.094314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.814 [2024-11-20 13:45:16.094331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.315 ms 00:25:16.814 [2024-11-20 13:45:16.094339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.094477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.094491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:16.814 [2024-11-20 13:45:16.094500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:16.814 [2024-11-20 13:45:16.094506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.136915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.136965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.814 [2024-11-20 13:45:16.136994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.385 ms 00:25:16.814 [2024-11-20 13:45:16.137006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.137131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.137142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.814 [2024-11-20 13:45:16.137151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:16.814 [2024-11-20 13:45:16.137158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.137478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.137523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.814 [2024-11-20 13:45:16.137532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:25:16.814 [2024-11-20 13:45:16.137546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.137671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.137684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.814 [2024-11-20 13:45:16.137692] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:16.814 [2024-11-20 13:45:16.137700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.151063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.151099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.814 [2024-11-20 13:45:16.151110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.344 ms 00:25:16.814 [2024-11-20 13:45:16.151118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.163666] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:16.814 [2024-11-20 13:45:16.163707] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:16.814 [2024-11-20 13:45:16.163721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.163729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:16.814 [2024-11-20 13:45:16.163739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.484 ms 00:25:16.814 [2024-11-20 13:45:16.163748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.189040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.189087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:16.814 [2024-11-20 13:45:16.189107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.207 ms 00:25:16.814 [2024-11-20 13:45:16.189116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.200609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.200757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:16.814 [2024-11-20 13:45:16.200773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.422 ms 00:25:16.814 [2024-11-20 13:45:16.200781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.212243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.212358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:16.814 [2024-11-20 13:45:16.212410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.393 ms 00:25:16.814 [2024-11-20 13:45:16.212432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.814 [2024-11-20 13:45:16.213107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.814 [2024-11-20 13:45:16.213195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:16.814 [2024-11-20 13:45:16.213302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:25:16.814 [2024-11-20 13:45:16.213333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.268098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.268284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:17.072 [2024-11-20 13:45:16.268340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.722 ms 00:25:17.072 [2024-11-20 13:45:16.268362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.278878] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:17.072 [2024-11-20 13:45:16.292747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.292881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:17.072 [2024-11-20 13:45:16.292994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.267 ms 00:25:17.072 [2024-11-20 13:45:16.293018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.293125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.293151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:17.072 [2024-11-20 13:45:16.293171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:17.072 [2024-11-20 13:45:16.293230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.293299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.293322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:17.072 [2024-11-20 13:45:16.293341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:17.072 [2024-11-20 13:45:16.293360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.293442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.293470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:17.072 [2024-11-20 13:45:16.293490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:17.072 [2024-11-20 13:45:16.293508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.293551] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:17.072 [2024-11-20 13:45:16.293602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.293624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:17.072 [2024-11-20 13:45:16.293643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:17.072 [2024-11-20 13:45:16.293661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.316310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.316439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:17.072 [2024-11-20 13:45:16.316491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.614 ms 00:25:17.072 [2024-11-20 13:45:16.316513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.072 [2024-11-20 13:45:16.316661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.072 [2024-11-20 13:45:16.316701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:17.072 [2024-11-20 13:45:16.316722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:17.072 [2024-11-20 13:45:16.316787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
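Each trace_step record above times one management step (Action or Rollback, plus name, duration, status); the finish_msg record that follows rolls them up into the overall 'FTL startup' total of 275.542 ms. A hypothetical post-processing one-liner for a saved copy of this console output (console.log is an assumed filename) that sums the per-step durations as a cross-check:

  # Sums every per-step 'duration:' value printed by trace_step; finish_msg
  # totals use 'duration =' and are deliberately not matched. Slice the log
  # down to a single management process first for a per-process figure.
  grep -o 'duration: [0-9.]\+ ms' console.log |
    awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'

The two figures will not agree exactly: finish_msg reports wall-clock time for the whole management process, including gaps between steps that the per-step durations do not cover.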
00:25:17.072 [2024-11-20 13:45:16.317833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.072 [2024-11-20 13:45:16.321058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.542 ms, result 0 00:25:17.072 [2024-11-20 13:45:16.321739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:17.072 [2024-11-20 13:45:16.334626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:18.004  [2024-11-20T13:45:18.364Z] Copying: 44/256 [MB] (44 MBps) [2024-11-20T13:45:19.737Z] Copying: 86/256 [MB] (42 MBps) [2024-11-20T13:45:20.672Z] Copying: 129/256 [MB] (43 MBps) [2024-11-20T13:45:21.603Z] Copying: 171/256 [MB] (42 MBps) [2024-11-20T13:45:22.537Z] Copying: 213/256 [MB] (42 MBps) [2024-11-20T13:45:22.538Z] Copying: 256/256 [MB] (average 42 MBps)[2024-11-20 13:45:22.326943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:23.111 [2024-11-20 13:45:22.336064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.336191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:23.111 [2024-11-20 13:45:22.336251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:23.111 [2024-11-20 13:45:22.336281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.336317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:23.111 [2024-11-20 13:45:22.338938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.339055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:23.111 [2024-11-20 13:45:22.339112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:25:23.111 [2024-11-20 13:45:22.339133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.340926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.341040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:23.111 [2024-11-20 13:45:22.341097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:25:23.111 [2024-11-20 13:45:22.341118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.348235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.348340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:23.111 [2024-11-20 13:45:22.348397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:25:23.111 [2024-11-20 13:45:22.348408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.355422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.355524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:23.111 [2024-11-20 13:45:22.355538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.970 ms 00:25:23.111 [2024-11-20 13:45:22.355546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 
13:45:22.378499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.378635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:23.111 [2024-11-20 13:45:22.378652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.910 ms 00:25:23.111 [2024-11-20 13:45:22.378660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.392684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.392722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:23.111 [2024-11-20 13:45:22.392737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.989 ms 00:25:23.111 [2024-11-20 13:45:22.392745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.392879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.392888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:23.111 [2024-11-20 13:45:22.392897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:23.111 [2024-11-20 13:45:22.392904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.415915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.415956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:23.111 [2024-11-20 13:45:22.415980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.993 ms 00:25:23.111 [2024-11-20 13:45:22.415989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.438371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.438409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:23.111 [2024-11-20 13:45:22.438420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.333 ms 00:25:23.111 [2024-11-20 13:45:22.438427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.460146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.460182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:23.111 [2024-11-20 13:45:22.460192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.679 ms 00:25:23.111 [2024-11-20 13:45:22.460200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.481894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.111 [2024-11-20 13:45:22.481928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:23.111 [2024-11-20 13:45:22.481938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.629 ms 00:25:23.111 [2024-11-20 13:45:22.481946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.111 [2024-11-20 13:45:22.481993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:23.111 [2024-11-20 13:45:22.482008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:25:23.111 [2024-11-20 13:45:22.482040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:25:23.111 [2024-11-20 13:45:22.482228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:23.111 [2024-11-20 13:45:22.482301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482587] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:23.112 [2024-11-20 13:45:22.482784] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:23.112 [2024-11-20 13:45:22.482792] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:23.112 [2024-11-20 13:45:22.482800] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:23.112 [2024-11-20 13:45:22.482807] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:23.112 [2024-11-20 13:45:22.482814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:23.112 [2024-11-20 13:45:22.482822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:23.112 [2024-11-20 13:45:22.482829] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:23.112 [2024-11-20 13:45:22.482836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:23.112 [2024-11-20 13:45:22.482843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:23.112 [2024-11-20 13:45:22.482850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:23.112 [2024-11-20 13:45:22.482856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:23.112 [2024-11-20 13:45:22.482864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.112 [2024-11-20 13:45:22.482874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:23.112 [2024-11-20 13:45:22.482882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:25:23.112 [2024-11-20 13:45:22.482889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.112 [2024-11-20 13:45:22.495376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.112 [2024-11-20 13:45:22.495407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:23.112 [2024-11-20 13:45:22.495418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.470 ms 00:25:23.112 [2024-11-20 13:45:22.495425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.112 [2024-11-20 13:45:22.495780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.112 [2024-11-20 13:45:22.495789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:23.112 [2024-11-20 13:45:22.495797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:23.112 [2024-11-20 13:45:22.495803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.112 [2024-11-20 13:45:22.530194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.112 [2024-11-20 13:45:22.530235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.112 [2024-11-20 13:45:22.530246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.112 [2024-11-20 13:45:22.530255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.112 [2024-11-20 13:45:22.530346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.112 [2024-11-20 13:45:22.530356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.112 [2024-11-20 13:45:22.530363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.112 [2024-11-20 13:45:22.530370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.113 [2024-11-20 13:45:22.530411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.113 [2024-11-20 13:45:22.530419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:25:23.113 [2024-11-20 13:45:22.530428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.113 [2024-11-20 13:45:22.530435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.113 [2024-11-20 13:45:22.530452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.113 [2024-11-20 13:45:22.530462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.113 [2024-11-20 13:45:22.530470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.113 [2024-11-20 13:45:22.530477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.606801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.606853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.371 [2024-11-20 13:45:22.606865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.606873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.371 [2024-11-20 13:45:22.669318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.371 [2024-11-20 13:45:22.669404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.371 [2024-11-20 13:45:22.669458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.371 [2024-11-20 13:45:22.669569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.371 [2024-11-20 13:45:22.669620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669673] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.371 [2024-11-20 13:45:22.669680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.371 [2024-11-20 13:45:22.669736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.371 [2024-11-20 13:45:22.669746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.371 [2024-11-20 13:45:22.669753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.371 [2024-11-20 13:45:22.669881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.814 ms, result 0 00:25:24.306 00:25:24.306 00:25:24.306 13:45:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76816 00:25:24.306 13:45:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:24.306 13:45:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76816 00:25:24.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.306 13:45:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76816 ']' 00:25:24.306 13:45:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.306 13:45:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.306 13:45:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.306 13:45:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.306 13:45:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:24.306 [2024-11-20 13:45:23.715919] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:25:24.306 [2024-11-20 13:45:23.716238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76816 ] 00:25:24.564 [2024-11-20 13:45:23.875370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.564 [2024-11-20 13:45:23.973958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.498 13:45:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.498 13:45:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:25.498 13:45:24 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:25.498 [2024-11-20 13:45:24.773001] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.498 [2024-11-20 13:45:24.773065] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.759 [2024-11-20 13:45:24.927249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.759 [2024-11-20 13:45:24.927300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:25.759 [2024-11-20 13:45:24.927315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:25.759 [2024-11-20 13:45:24.927323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.759 [2024-11-20 13:45:24.929948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.759 [2024-11-20 13:45:24.929996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.759 [2024-11-20 13:45:24.930008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:25:25.759 [2024-11-20 13:45:24.930015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.759 [2024-11-20 13:45:24.930086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:25.759 [2024-11-20 13:45:24.930734] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:25.759 [2024-11-20 13:45:24.930854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.759 [2024-11-20 13:45:24.930864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.759 [2024-11-20 13:45:24.930874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:25:25.759 [2024-11-20 13:45:24.930882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.759 [2024-11-20 13:45:24.931922] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:25.759 [2024-11-20 13:45:24.944089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.759 [2024-11-20 13:45:24.944125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:25.759 [2024-11-20 13:45:24.944136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.171 ms 00:25:25.759 [2024-11-20 13:45:24.944146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.759 [2024-11-20 13:45:24.944221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.759 [2024-11-20 13:45:24.944236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:25.759 [2024-11-20 13:45:24.944245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:25.759 [2024-11-20 13:45:24.944253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.759 [2024-11-20 13:45:24.948837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.759 [2024-11-20 13:45:24.948879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.760 [2024-11-20 13:45:24.948889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:25:25.760 [2024-11-20 13:45:24.948898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.949031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.760 [2024-11-20 13:45:24.949044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.760 [2024-11-20 13:45:24.949052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:25.760 [2024-11-20 13:45:24.949062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.949091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.760 [2024-11-20 13:45:24.949100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:25.760 [2024-11-20 13:45:24.949128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:25.760 [2024-11-20 13:45:24.949141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.949166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:25.760 [2024-11-20 13:45:24.952308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.760 [2024-11-20 13:45:24.952427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.760 [2024-11-20 13:45:24.952444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:25:25.760 [2024-11-20 13:45:24.952452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.952489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.760 [2024-11-20 13:45:24.952497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:25.760 [2024-11-20 13:45:24.952506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:25.760 [2024-11-20 13:45:24.952515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.952536] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:25.760 [2024-11-20 13:45:24.952552] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:25.760 [2024-11-20 13:45:24.952592] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:25.760 [2024-11-20 13:45:24.952606] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:25.760 [2024-11-20 13:45:24.952708] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:25.760 [2024-11-20 13:45:24.952718] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:25.760 [2024-11-20 13:45:24.952734] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:25.760 [2024-11-20 13:45:24.952743] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:25.760 [2024-11-20 13:45:24.952754] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:25.760 [2024-11-20 13:45:24.952761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:25.760 [2024-11-20 13:45:24.952770] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:25.760 [2024-11-20 13:45:24.952777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:25.760 [2024-11-20 13:45:24.952788] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:25.760 [2024-11-20 13:45:24.952795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.760 [2024-11-20 13:45:24.952803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:25.760 [2024-11-20 13:45:24.952811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:25:25.760 [2024-11-20 13:45:24.952819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.952915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.760 [2024-11-20 13:45:24.952924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:25.760 [2024-11-20 13:45:24.952932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:25.760 [2024-11-20 13:45:24.952940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.760 [2024-11-20 13:45:24.953063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:25.760 [2024-11-20 13:45:24.953076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:25.760 [2024-11-20 13:45:24.953084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:25.760 [2024-11-20 13:45:24.953112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:25.760 [2024-11-20 13:45:24.953138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.760 [2024-11-20 13:45:24.953153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:25.760 [2024-11-20 13:45:24.953166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:25.760 [2024-11-20 13:45:24.953172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.760 [2024-11-20 13:45:24.953180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:25.760 [2024-11-20 13:45:24.953187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:25.760 [2024-11-20 13:45:24.953194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.760 
[2024-11-20 13:45:24.953201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:25.760 [2024-11-20 13:45:24.953209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:25.760 [2024-11-20 13:45:24.953235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:25.760 [2024-11-20 13:45:24.953259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:25.760 [2024-11-20 13:45:24.953279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:25.760 [2024-11-20 13:45:24.953301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:25.760 [2024-11-20 13:45:24.953321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.760 [2024-11-20 13:45:24.953336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:25.760 [2024-11-20 13:45:24.953343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:25.760 [2024-11-20 13:45:24.953350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.760 [2024-11-20 13:45:24.953359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:25.760 [2024-11-20 13:45:24.953365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:25.760 [2024-11-20 13:45:24.953374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:25.760 [2024-11-20 13:45:24.953389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:25.760 [2024-11-20 13:45:24.953395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953404] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:25.760 [2024-11-20 13:45:24.953413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:25.760 [2024-11-20 13:45:24.953422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.760 [2024-11-20 13:45:24.953437] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:25.760 [2024-11-20 13:45:24.953443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:25.760 [2024-11-20 13:45:24.953451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:25.760 [2024-11-20 13:45:24.953458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:25.760 [2024-11-20 13:45:24.953465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:25.760 [2024-11-20 13:45:24.953472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:25.760 [2024-11-20 13:45:24.953481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:25.760 [2024-11-20 13:45:24.953490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.760 [2024-11-20 13:45:24.953502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:25.760 [2024-11-20 13:45:24.953509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:25.760 [2024-11-20 13:45:24.953518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:25.760 [2024-11-20 13:45:24.953525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:25.761 [2024-11-20 13:45:24.953533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:25.761 [2024-11-20 13:45:24.953540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:25.761 [2024-11-20 13:45:24.953549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:25.761 [2024-11-20 13:45:24.953555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:25.761 [2024-11-20 13:45:24.953564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:25.761 [2024-11-20 13:45:24.953571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:25.761 [2024-11-20 13:45:24.953579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:25.761 [2024-11-20 13:45:24.953586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:25.761 [2024-11-20 13:45:24.953595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:25.761 [2024-11-20 13:45:24.953603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:25.761 [2024-11-20 13:45:24.953611] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:25.761 [2024-11-20 
13:45:24.953619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.761 [2024-11-20 13:45:24.953630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:25.761 [2024-11-20 13:45:24.953637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:25.761 [2024-11-20 13:45:24.953645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:25.761 [2024-11-20 13:45:24.953652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:25.761 [2024-11-20 13:45:24.953661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:24.953668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:25.761 [2024-11-20 13:45:24.953676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:25:25.761 [2024-11-20 13:45:24.953683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:24.978981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:24.979124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.761 [2024-11-20 13:45:24.979143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.230 ms 00:25:25.761 [2024-11-20 13:45:24.979154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:24.979285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:24.979294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:25.761 [2024-11-20 13:45:24.979304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:25.761 [2024-11-20 13:45:24.979311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.009210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.009245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.761 [2024-11-20 13:45:25.009257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.875 ms 00:25:25.761 [2024-11-20 13:45:25.009264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.009331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.009340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.761 [2024-11-20 13:45:25.009350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:25.761 [2024-11-20 13:45:25.009357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.009649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.009668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.761 [2024-11-20 13:45:25.009681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:25:25.761 [2024-11-20 13:45:25.009689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.009812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.009826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.761 [2024-11-20 13:45:25.009835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:25.761 [2024-11-20 13:45:25.009842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.023748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.023777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.761 [2024-11-20 13:45:25.023789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:25:25.761 [2024-11-20 13:45:25.023796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.043419] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:25.761 [2024-11-20 13:45:25.043552] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:25.761 [2024-11-20 13:45:25.043573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.043582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:25.761 [2024-11-20 13:45:25.043593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.658 ms 00:25:25.761 [2024-11-20 13:45:25.043601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.068151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.068198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:25.761 [2024-11-20 13:45:25.068211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.490 ms 00:25:25.761 [2024-11-20 13:45:25.068218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.079824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.079852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:25.761 [2024-11-20 13:45:25.079865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.534 ms 00:25:25.761 [2024-11-20 13:45:25.079873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.091028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.091057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:25.761 [2024-11-20 13:45:25.091068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:25:25.761 [2024-11-20 13:45:25.091076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.091681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.091699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:25.761 [2024-11-20 13:45:25.091709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:25:25.761 [2024-11-20 13:45:25.091716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 
13:45:25.146073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.146260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:25.761 [2024-11-20 13:45:25.146283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.330 ms 00:25:25.761 [2024-11-20 13:45:25.146292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.156944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:25.761 [2024-11-20 13:45:25.170735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.170778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:25.761 [2024-11-20 13:45:25.170794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.342 ms 00:25:25.761 [2024-11-20 13:45:25.170803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.170890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.170902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:25.761 [2024-11-20 13:45:25.170910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:25.761 [2024-11-20 13:45:25.170919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.170964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.170992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:25.761 [2024-11-20 13:45:25.171000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:25.761 [2024-11-20 13:45:25.171012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.171034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.171044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:25.761 [2024-11-20 13:45:25.171051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:25.761 [2024-11-20 13:45:25.171062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.761 [2024-11-20 13:45:25.171092] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:25.761 [2024-11-20 13:45:25.171105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.761 [2024-11-20 13:45:25.171112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:25.761 [2024-11-20 13:45:25.171124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:25.761 [2024-11-20 13:45:25.171130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.066 [2024-11-20 13:45:25.193795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.066 [2024-11-20 13:45:25.193828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:26.066 [2024-11-20 13:45:25.193843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.638 ms 00:25:26.066 [2024-11-20 13:45:25.193852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.066 [2024-11-20 13:45:25.193941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.066 [2024-11-20 13:45:25.193951] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:26.067 [2024-11-20 13:45:25.193961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:26.067 [2024-11-20 13:45:25.193987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.067 [2024-11-20 13:45:25.194810] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:26.067 [2024-11-20 13:45:25.197696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.293 ms, result 0 00:25:26.067 [2024-11-20 13:45:25.198441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.067 Some configs were skipped because the RPC state that can call them passed over. 00:25:26.067 13:45:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:26.067 [2024-11-20 13:45:25.424534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.067 [2024-11-20 13:45:25.424712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:26.067 [2024-11-20 13:45:25.424771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:25:26.067 [2024-11-20 13:45:25.424798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.067 [2024-11-20 13:45:25.424865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.511 ms, result 0 00:25:26.067 true 00:25:26.067 13:45:25 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:26.324 [2024-11-20 13:45:25.632440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.324 [2024-11-20 13:45:25.632589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:26.324 [2024-11-20 13:45:25.632702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:25:26.324 [2024-11-20 13:45:25.632730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.324 [2024-11-20 13:45:25.632788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.171 ms, result 0 00:25:26.324 true 00:25:26.324 13:45:25 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76816 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76816 ']' 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76816 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76816 00:25:26.324 killing process with pid 76816 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76816' 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76816 00:25:26.324 13:45:25 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76816 00:25:27.259 [2024-11-20 13:45:26.404305] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.404354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:27.259 [2024-11-20 13:45:26.404366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:27.259 [2024-11-20 13:45:26.404376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.259 [2024-11-20 13:45:26.404412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:27.259 [2024-11-20 13:45:26.406949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.406982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:27.259 [2024-11-20 13:45:26.406996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:25:27.259 [2024-11-20 13:45:26.407003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.259 [2024-11-20 13:45:26.407284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.407297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:27.259 [2024-11-20 13:45:26.407306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:25:27.259 [2024-11-20 13:45:26.407314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.259 [2024-11-20 13:45:26.411760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.411928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:27.259 [2024-11-20 13:45:26.411950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.427 ms 00:25:27.259 [2024-11-20 13:45:26.411957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.259 [2024-11-20 13:45:26.418926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.419051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:27.259 [2024-11-20 13:45:26.419069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:25:27.259 [2024-11-20 13:45:26.419077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.259 [2024-11-20 13:45:26.428441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.428473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:27.259 [2024-11-20 13:45:26.428487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.308 ms 00:25:27.259 [2024-11-20 13:45:26.428501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.259 [2024-11-20 13:45:26.435511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.259 [2024-11-20 13:45:26.435546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:27.259 [2024-11-20 13:45:26.435559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.971 ms 00:25:27.259 [2024-11-20 13:45:26.435568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.260 [2024-11-20 13:45:26.435705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.260 [2024-11-20 13:45:26.435715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:27.260 [2024-11-20 13:45:26.435725] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:27.260 [2024-11-20 13:45:26.435733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.260 [2024-11-20 13:45:26.445383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.260 [2024-11-20 13:45:26.445411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:27.260 [2024-11-20 13:45:26.445422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.628 ms 00:25:27.260 [2024-11-20 13:45:26.445430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.260 [2024-11-20 13:45:26.454890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.260 [2024-11-20 13:45:26.455044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:27.260 [2024-11-20 13:45:26.455066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.422 ms 00:25:27.260 [2024-11-20 13:45:26.455073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.260 [2024-11-20 13:45:26.464016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.260 [2024-11-20 13:45:26.464045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:27.260 [2024-11-20 13:45:26.464059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.894 ms 00:25:27.260 [2024-11-20 13:45:26.464066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.260 [2024-11-20 13:45:26.472919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.260 [2024-11-20 13:45:26.472948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:27.260 [2024-11-20 13:45:26.472960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.791 ms 00:25:27.260 [2024-11-20 13:45:26.472988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.260 [2024-11-20 13:45:26.473022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:27.260 [2024-11-20 13:45:26.473036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473124] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 
[2024-11-20 13:45:26.473330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:25:27.260 [2024-11-20 13:45:26.473537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:27.260 [2024-11-20 13:45:26.473638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:27.261 [2024-11-20 13:45:26.473875] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:27.261 [2024-11-20 13:45:26.473887] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:27.261 [2024-11-20 13:45:26.473902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:27.261 [2024-11-20 13:45:26.473914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:27.261 [2024-11-20 13:45:26.473921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:27.261 [2024-11-20 13:45:26.473930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:27.261 [2024-11-20 13:45:26.473937] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:27.261 [2024-11-20 13:45:26.473946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:27.261 [2024-11-20 13:45:26.473953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:27.261 [2024-11-20 13:45:26.473961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:27.261 [2024-11-20 13:45:26.473977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:27.261 [2024-11-20 13:45:26.473987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:27.261 [2024-11-20 13:45:26.473994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:27.261 [2024-11-20 13:45:26.474003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:25:27.261 [2024-11-20 13:45:26.474011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.486644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.261 [2024-11-20 13:45:26.486767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:27.261 [2024-11-20 13:45:26.486788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.611 ms 00:25:27.261 [2024-11-20 13:45:26.486795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.487196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.261 [2024-11-20 13:45:26.487208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:27.261 [2024-11-20 13:45:26.487218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:25:27.261 [2024-11-20 13:45:26.487228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.530555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.530598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.261 [2024-11-20 13:45:26.530612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.530620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.530736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.530745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.261 [2024-11-20 13:45:26.530755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.530765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.530809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.530817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.261 [2024-11-20 13:45:26.530829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.530836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.530855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.530863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.261 [2024-11-20 13:45:26.530872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.530879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.607736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.607785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.261 [2024-11-20 13:45:26.607799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.607806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 
13:45:26.672353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.261 [2024-11-20 13:45:26.672417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.672514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.261 [2024-11-20 13:45:26.672535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.672571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.261 [2024-11-20 13:45:26.672588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.672683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.261 [2024-11-20 13:45:26.672702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.672741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:27.261 [2024-11-20 13:45:26.672759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.672801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.261 [2024-11-20 13:45:26.672819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.672884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.261 [2024-11-20 13:45:26.672899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.261 [2024-11-20 13:45:26.672912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.261 [2024-11-20 13:45:26.672922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.261 [2024-11-20 13:45:26.673080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.754 ms, result 0 00:25:28.194 13:45:27 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:28.194 13:45:27 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:28.194 [2024-11-20 13:45:27.402577] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:25:28.194 [2024-11-20 13:45:27.402694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76870 ] 00:25:28.194 [2024-11-20 13:45:27.563430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.454 [2024-11-20 13:45:27.662006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.713 [2024-11-20 13:45:27.916366] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.713 [2024-11-20 13:45:27.916427] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.713 [2024-11-20 13:45:28.070580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.070625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:28.713 [2024-11-20 13:45:28.070637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:28.713 [2024-11-20 13:45:28.070646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.073284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.073418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.713 [2024-11-20 13:45:28.073435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:25:28.713 [2024-11-20 13:45:28.073442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.073554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:28.713 [2024-11-20 13:45:28.074255] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:28.713 [2024-11-20 13:45:28.074280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.074289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.713 [2024-11-20 13:45:28.074297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:25:28.713 [2024-11-20 13:45:28.074305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.075345] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:28.713 [2024-11-20 13:45:28.087597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.087642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:28.713 [2024-11-20 13:45:28.087653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:25:28.713 [2024-11-20 13:45:28.087661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.087747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.087758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:28.713 [2024-11-20 13:45:28.087766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:25:28.713 [2024-11-20 13:45:28.087773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.092372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.092500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.713 [2024-11-20 13:45:28.092514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.559 ms 00:25:28.713 [2024-11-20 13:45:28.092522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.092613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.092623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:28.713 [2024-11-20 13:45:28.092631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:28.713 [2024-11-20 13:45:28.092638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.092662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.092672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:28.713 [2024-11-20 13:45:28.092680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:28.713 [2024-11-20 13:45:28.092687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.092706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:28.713 [2024-11-20 13:45:28.095912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.096042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.713 [2024-11-20 13:45:28.096057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:25:28.713 [2024-11-20 13:45:28.096065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.096100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.713 [2024-11-20 13:45:28.096108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:28.713 [2024-11-20 13:45:28.096116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:28.713 [2024-11-20 13:45:28.096123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.713 [2024-11-20 13:45:28.096140] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:28.713 [2024-11-20 13:45:28.096161] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:28.713 [2024-11-20 13:45:28.096194] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:28.713 [2024-11-20 13:45:28.096209] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:28.713 [2024-11-20 13:45:28.096310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:28.714 [2024-11-20 13:45:28.096320] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:28.714 [2024-11-20 13:45:28.096330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:28.714 [2024-11-20 13:45:28.096339] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096350] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096357] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:28.714 [2024-11-20 13:45:28.096364] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:28.714 [2024-11-20 13:45:28.096372] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:28.714 [2024-11-20 13:45:28.096378] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:28.714 [2024-11-20 13:45:28.096385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.714 [2024-11-20 13:45:28.096393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:28.714 [2024-11-20 13:45:28.096400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:25:28.714 [2024-11-20 13:45:28.096407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.714 [2024-11-20 13:45:28.096493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.714 [2024-11-20 13:45:28.096504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:28.714 [2024-11-20 13:45:28.096511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:28.714 [2024-11-20 13:45:28.096517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.714 [2024-11-20 13:45:28.096629] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:28.714 [2024-11-20 13:45:28.096639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:28.714 [2024-11-20 13:45:28.096647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:28.714 [2024-11-20 13:45:28.096669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:28.714 [2024-11-20 13:45:28.096690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.714 [2024-11-20 13:45:28.096703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:28.714 [2024-11-20 13:45:28.096710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:28.714 [2024-11-20 13:45:28.096717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.714 [2024-11-20 13:45:28.096728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:28.714 [2024-11-20 13:45:28.096735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:28.714 [2024-11-20 13:45:28.096741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096747] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:28.714 [2024-11-20 13:45:28.096754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:28.714 [2024-11-20 13:45:28.096774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:28.714 [2024-11-20 13:45:28.096793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:28.714 [2024-11-20 13:45:28.096812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:28.714 [2024-11-20 13:45:28.096831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:28.714 [2024-11-20 13:45:28.096871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.714 [2024-11-20 13:45:28.096884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:28.714 [2024-11-20 13:45:28.096890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:28.714 [2024-11-20 13:45:28.096896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.714 [2024-11-20 13:45:28.096903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:28.714 [2024-11-20 13:45:28.096909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:28.714 [2024-11-20 13:45:28.096916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:28.714 [2024-11-20 13:45:28.096929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:28.714 [2024-11-20 13:45:28.096936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096943] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:28.714 [2024-11-20 13:45:28.096950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:28.714 [2024-11-20 13:45:28.096957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.714 [2024-11-20 13:45:28.096977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.714 [2024-11-20 13:45:28.096986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:28.714 
[2024-11-20 13:45:28.096993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:28.714 [2024-11-20 13:45:28.096999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:28.714 [2024-11-20 13:45:28.097006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:28.714 [2024-11-20 13:45:28.097014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:28.714 [2024-11-20 13:45:28.097021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:28.714 [2024-11-20 13:45:28.097029] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:28.715 [2024-11-20 13:45:28.097039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:28.715 [2024-11-20 13:45:28.097054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:28.715 [2024-11-20 13:45:28.097061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:28.715 [2024-11-20 13:45:28.097068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:28.715 [2024-11-20 13:45:28.097075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:28.715 [2024-11-20 13:45:28.097082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:28.715 [2024-11-20 13:45:28.097089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:28.715 [2024-11-20 13:45:28.097096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:28.715 [2024-11-20 13:45:28.097102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:28.715 [2024-11-20 13:45:28.097109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:28.715 [2024-11-20 13:45:28.097143] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:28.715 [2024-11-20 13:45:28.097151] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:28.715 [2024-11-20 13:45:28.097166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:28.715 [2024-11-20 13:45:28.097174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:28.715 [2024-11-20 13:45:28.097180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:28.715 [2024-11-20 13:45:28.097188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.715 [2024-11-20 13:45:28.097194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:28.715 [2024-11-20 13:45:28.097204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:25:28.715 [2024-11-20 13:45:28.097211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.715 [2024-11-20 13:45:28.122523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.715 [2024-11-20 13:45:28.122558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.715 [2024-11-20 13:45:28.122569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.259 ms 00:25:28.715 [2024-11-20 13:45:28.122576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.715 [2024-11-20 13:45:28.122699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.715 [2024-11-20 13:45:28.122712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:28.715 [2024-11-20 13:45:28.122720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:28.715 [2024-11-20 13:45:28.122727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.973 [2024-11-20 13:45:28.166412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.973 [2024-11-20 13:45:28.166461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.973 [2024-11-20 13:45:28.166474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.664 ms 00:25:28.973 [2024-11-20 13:45:28.166485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.973 [2024-11-20 13:45:28.166596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.973 [2024-11-20 13:45:28.166608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.973 [2024-11-20 13:45:28.166617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:28.973 [2024-11-20 13:45:28.166624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.973 [2024-11-20 13:45:28.166940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.973 [2024-11-20 13:45:28.166954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.973 [2024-11-20 13:45:28.166963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:25:28.974 [2024-11-20 13:45:28.166999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 
13:45:28.167136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.167145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.974 [2024-11-20 13:45:28.167154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:25:28.974 [2024-11-20 13:45:28.167161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.180212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.180246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.974 [2024-11-20 13:45:28.180257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.031 ms 00:25:28.974 [2024-11-20 13:45:28.180264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.192244] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:28.974 [2024-11-20 13:45:28.192293] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:28.974 [2024-11-20 13:45:28.192305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.192312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:28.974 [2024-11-20 13:45:28.192322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.938 ms 00:25:28.974 [2024-11-20 13:45:28.192329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.216305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.216349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:28.974 [2024-11-20 13:45:28.216360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.896 ms 00:25:28.974 [2024-11-20 13:45:28.216368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.228032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.228061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:28.974 [2024-11-20 13:45:28.228071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.585 ms 00:25:28.974 [2024-11-20 13:45:28.228078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.239066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.239094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:28.974 [2024-11-20 13:45:28.239105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.925 ms 00:25:28.974 [2024-11-20 13:45:28.239113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.239720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.239739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:28.974 [2024-11-20 13:45:28.239748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:25:28.974 [2024-11-20 13:45:28.239755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.293903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.293965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:28.974 [2024-11-20 13:45:28.293999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.125 ms 00:25:28.974 [2024-11-20 13:45:28.294008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.304387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:28.974 [2024-11-20 13:45:28.318212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.318254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:28.974 [2024-11-20 13:45:28.318267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.100 ms 00:25:28.974 [2024-11-20 13:45:28.318279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.318370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.318381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:28.974 [2024-11-20 13:45:28.318390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:28.974 [2024-11-20 13:45:28.318398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.318445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.318453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:28.974 [2024-11-20 13:45:28.318462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:28.974 [2024-11-20 13:45:28.318469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.318499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.318507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:28.974 [2024-11-20 13:45:28.318515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:28.974 [2024-11-20 13:45:28.318522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.318553] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:28.974 [2024-11-20 13:45:28.318562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.318570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:28.974 [2024-11-20 13:45:28.318577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:28.974 [2024-11-20 13:45:28.318584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.341337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.341473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:28.974 [2024-11-20 13:45:28.341490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.733 ms 00:25:28.974 [2024-11-20 13:45:28.341498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.341585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.974 [2024-11-20 13:45:28.341596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:28.974 [2024-11-20 13:45:28.341604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:28.974 [2024-11-20 13:45:28.341612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.974 [2024-11-20 13:45:28.342432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:28.974 [2024-11-20 13:45:28.345392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.587 ms, result 0 00:25:28.974 [2024-11-20 13:45:28.346151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:28.974 [2024-11-20 13:45:28.358861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.345  [2024-11-20T13:45:30.704Z] Copying: 45/256 [MB] (45 MBps) [2024-11-20T13:45:31.637Z] Copying: 91/256 [MB] (45 MBps) [2024-11-20T13:45:32.570Z] Copying: 132/256 [MB] (41 MBps) [2024-11-20T13:45:33.503Z] Copying: 177/256 [MB] (44 MBps) [2024-11-20T13:45:34.440Z] Copying: 220/256 [MB] (43 MBps) [2024-11-20T13:45:34.440Z] Copying: 256/256 [MB] (average 44 MBps)[2024-11-20 13:45:34.150528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:35.013 [2024-11-20 13:45:34.159740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.013 [2024-11-20 13:45:34.159902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:35.013 [2024-11-20 13:45:34.159922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.014 [2024-11-20 13:45:34.159940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.159965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:35.014 [2024-11-20 13:45:34.162547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.162575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:35.014 [2024-11-20 13:45:34.162586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:25:35.014 [2024-11-20 13:45:34.162595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.162847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.162861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:35.014 [2024-11-20 13:45:34.162870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:25:35.014 [2024-11-20 13:45:34.162879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.166578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.166687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:35.014 [2024-11-20 13:45:34.166700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:25:35.014 [2024-11-20 13:45:34.166707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.173660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.173756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:25:35.014 [2024-11-20 13:45:34.173770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:25:35.014 [2024-11-20 13:45:34.173778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.196508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.196540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:35.014 [2024-11-20 13:45:34.196552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.676 ms 00:25:35.014 [2024-11-20 13:45:34.196560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.210071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.210110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:35.014 [2024-11-20 13:45:34.210124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.468 ms 00:25:35.014 [2024-11-20 13:45:34.210132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.210266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.210277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:35.014 [2024-11-20 13:45:34.210286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:35.014 [2024-11-20 13:45:34.210293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.232444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.232479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:35.014 [2024-11-20 13:45:34.232490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.128 ms 00:25:35.014 [2024-11-20 13:45:34.232498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.254610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.254643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.014 [2024-11-20 13:45:34.254653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.088 ms 00:25:35.014 [2024-11-20 13:45:34.254661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.277032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.277064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.014 [2024-11-20 13:45:34.277075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.348 ms 00:25:35.014 [2024-11-20 13:45:34.277082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.298869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.014 [2024-11-20 13:45:34.298902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.014 [2024-11-20 13:45:34.298912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.737 ms 00:25:35.014 [2024-11-20 13:45:34.298920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.014 [2024-11-20 13:45:34.298965] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.014 [2024-11-20 
13:45:34.298993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 
[2024-11-20 13:45:34.299179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:35.014 [2024-11-20 13:45:34.299245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:25:35.015 [2024-11-20 13:45:34.299358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:35.015 [2024-11-20 13:45:34.299751] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.015 [2024-11-20 13:45:34.299759] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:35.015 [2024-11-20 13:45:34.299767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:35.015 [2024-11-20 13:45:34.299774] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:35.015 [2024-11-20 13:45:34.299781] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:35.015 [2024-11-20 13:45:34.299789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:35.015 [2024-11-20 13:45:34.299796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.015 [2024-11-20 13:45:34.299803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.015 [2024-11-20 13:45:34.299810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.015 [2024-11-20 13:45:34.299817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.015 [2024-11-20 13:45:34.299824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.015 [2024-11-20 13:45:34.299830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.016 [2024-11-20 13:45:34.299840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.016 [2024-11-20 13:45:34.299847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:25:35.016 [2024-11-20 13:45:34.299854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.311883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.016 [2024-11-20 13:45:34.312045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.016 [2024-11-20 13:45:34.312061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.011 ms 00:25:35.016 [2024-11-20 13:45:34.312068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.312423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.016 [2024-11-20 13:45:34.312433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.016 [2024-11-20 13:45:34.312442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:25:35.016 [2024-11-20 13:45:34.312449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.346791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.016 [2024-11-20 13:45:34.346825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.016 [2024-11-20 13:45:34.346835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.016 [2024-11-20 13:45:34.346843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.346922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.016 [2024-11-20 13:45:34.346930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.016 [2024-11-20 13:45:34.346938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.016 [2024-11-20 13:45:34.346946] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.347001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.016 [2024-11-20 13:45:34.347011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.016 [2024-11-20 13:45:34.347018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.016 [2024-11-20 13:45:34.347026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.347043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.016 [2024-11-20 13:45:34.347054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.016 [2024-11-20 13:45:34.347061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.016 [2024-11-20 13:45:34.347068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.016 [2024-11-20 13:45:34.422362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.016 [2024-11-20 13:45:34.422512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.016 [2024-11-20 13:45:34.422527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.016 [2024-11-20 13:45:34.422535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.275 [2024-11-20 13:45:34.485505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.275 [2024-11-20 13:45:34.485513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.275 [2024-11-20 13:45:34.485586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.275 [2024-11-20 13:45:34.485593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.275 [2024-11-20 13:45:34.485640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.275 [2024-11-20 13:45:34.485647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.275 [2024-11-20 13:45:34.485756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.275 [2024-11-20 13:45:34.485763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.275 [2024-11-20 13:45:34.485809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:35.275 [2024-11-20 13:45:34.485819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.275 [2024-11-20 13:45:34.485869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.275 [2024-11-20 13:45:34.485876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.485915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.275 [2024-11-20 13:45:34.485924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.275 [2024-11-20 13:45:34.485934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.275 [2024-11-20 13:45:34.485941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.275 [2024-11-20 13:45:34.486090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 326.344 ms, result 0 00:25:35.841 00:25:35.841 00:25:35.841 13:45:35 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:35.841 13:45:35 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:36.410 13:45:35 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:36.410 [2024-11-20 13:45:35.778486] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
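At this point trim.sh has shut down the previous FTL instance, verified the dumped test file, and is rewriting the random pattern through a freshly started ftl0 bdev. A minimal sketch of what the three xtrace'd script lines above amount to, using the paths shown in the log (the SPDK_DIR variable is introduced here for readability; the 4 KiB block size is an assumption inferred from --count=1024 producing the 4096 kB copy reported further below):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo path as it appears in the log

    # Compare the first 4 MiB of the dumped file against zeros (consistent with
    # checking that a trimmed range reads back as zeros), then checksum it so a
    # later stage can compare against a known digest.
    cmp --bytes=4194304 "$SPDK_DIR/test/ftl/data" /dev/zero
    md5sum "$SPDK_DIR/test/ftl/data"

    # Write 1024 blocks (~4 MiB at the assumed 4 KiB block size) of a
    # pre-generated random pattern into the ftl0 bdev described by ftl.json.
    "$SPDK_DIR/build/bin/spdk_dd" \
        --if="$SPDK_DIR/test/ftl/random_pattern" \
        --ob=ftl0 \
        --count=1024 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"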
00:25:36.410 [2024-11-20 13:45:35.778762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76963 ] 00:25:36.669 [2024-11-20 13:45:35.939243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.669 [2024-11-20 13:45:36.037993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.926 [2024-11-20 13:45:36.292682] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.926 [2024-11-20 13:45:36.292739] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.186 [2024-11-20 13:45:36.445926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.445997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:37.186 [2024-11-20 13:45:36.446010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:37.186 [2024-11-20 13:45:36.446019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.448616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.448763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.186 [2024-11-20 13:45:36.448781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:25:37.186 [2024-11-20 13:45:36.448789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.448915] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:37.186 [2024-11-20 13:45:36.449591] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:37.186 [2024-11-20 13:45:36.449612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.449621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.186 [2024-11-20 13:45:36.449630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:25:37.186 [2024-11-20 13:45:36.449637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.450700] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:37.186 [2024-11-20 13:45:36.462710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.462745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:37.186 [2024-11-20 13:45:36.462757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.011 ms 00:25:37.186 [2024-11-20 13:45:36.462765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.462846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.462857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:37.186 [2024-11-20 13:45:36.462866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:37.186 [2024-11-20 13:45:36.462873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.467632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:37.186 [2024-11-20 13:45:36.467660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.186 [2024-11-20 13:45:36.467670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.719 ms 00:25:37.186 [2024-11-20 13:45:36.467677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.467763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.467772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.186 [2024-11-20 13:45:36.467779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:37.186 [2024-11-20 13:45:36.467791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.467814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.467824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:37.186 [2024-11-20 13:45:36.467832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:37.186 [2024-11-20 13:45:36.467839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.186 [2024-11-20 13:45:36.467859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:37.186 [2024-11-20 13:45:36.471144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.186 [2024-11-20 13:45:36.471169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.186 [2024-11-20 13:45:36.471178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:25:37.187 [2024-11-20 13:45:36.471186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.187 [2024-11-20 13:45:36.471219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.187 [2024-11-20 13:45:36.471227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:37.187 [2024-11-20 13:45:36.471235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:37.187 [2024-11-20 13:45:36.471243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.187 [2024-11-20 13:45:36.471260] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:37.187 [2024-11-20 13:45:36.471280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:37.187 [2024-11-20 13:45:36.471314] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:37.187 [2024-11-20 13:45:36.471329] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:37.187 [2024-11-20 13:45:36.471429] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:37.187 [2024-11-20 13:45:36.471439] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:37.187 [2024-11-20 13:45:36.471450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:37.187 [2024-11-20 13:45:36.471459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471470] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471478] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:37.187 [2024-11-20 13:45:36.471485] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:37.187 [2024-11-20 13:45:36.471492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:37.187 [2024-11-20 13:45:36.471499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:37.187 [2024-11-20 13:45:36.471507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.187 [2024-11-20 13:45:36.471514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:37.187 [2024-11-20 13:45:36.471521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:25:37.187 [2024-11-20 13:45:36.471528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.187 [2024-11-20 13:45:36.471614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.187 [2024-11-20 13:45:36.471624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:37.187 [2024-11-20 13:45:36.471631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:37.187 [2024-11-20 13:45:36.471637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.187 [2024-11-20 13:45:36.471736] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:37.187 [2024-11-20 13:45:36.471745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:37.187 [2024-11-20 13:45:36.471753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:37.187 [2024-11-20 13:45:36.471774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:37.187 [2024-11-20 13:45:36.471795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.187 [2024-11-20 13:45:36.471808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:37.187 [2024-11-20 13:45:36.471814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:37.187 [2024-11-20 13:45:36.471821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.187 [2024-11-20 13:45:36.471833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:37.187 [2024-11-20 13:45:36.471839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:37.187 [2024-11-20 13:45:36.471846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:37.187 [2024-11-20 13:45:36.471858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471864] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:37.187 [2024-11-20 13:45:36.471877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:37.187 [2024-11-20 13:45:36.471898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:37.187 [2024-11-20 13:45:36.471917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:37.187 [2024-11-20 13:45:36.471935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.187 [2024-11-20 13:45:36.471948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:37.187 [2024-11-20 13:45:36.471955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:37.187 [2024-11-20 13:45:36.471961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.187 [2024-11-20 13:45:36.471977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:37.187 [2024-11-20 13:45:36.471984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:37.187 [2024-11-20 13:45:36.471991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.187 [2024-11-20 13:45:36.471997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:37.187 [2024-11-20 13:45:36.472003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:37.187 [2024-11-20 13:45:36.472010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.472016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:37.187 [2024-11-20 13:45:36.472023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:37.187 [2024-11-20 13:45:36.472030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.472036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:37.187 [2024-11-20 13:45:36.472044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:37.187 [2024-11-20 13:45:36.472051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.187 [2024-11-20 13:45:36.472060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.187 [2024-11-20 13:45:36.472067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:37.187 [2024-11-20 13:45:36.472074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:37.187 [2024-11-20 13:45:36.472080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:37.187 
[2024-11-20 13:45:36.472087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:37.187 [2024-11-20 13:45:36.472093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:37.187 [2024-11-20 13:45:36.472099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:37.187 [2024-11-20 13:45:36.472109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:37.187 [2024-11-20 13:45:36.472117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.187 [2024-11-20 13:45:36.472126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:37.187 [2024-11-20 13:45:36.472133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:37.187 [2024-11-20 13:45:36.472139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:37.187 [2024-11-20 13:45:36.472146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:37.187 [2024-11-20 13:45:36.472153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:37.187 [2024-11-20 13:45:36.472160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:37.187 [2024-11-20 13:45:36.472166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:37.187 [2024-11-20 13:45:36.472173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:37.187 [2024-11-20 13:45:36.472180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:37.187 [2024-11-20 13:45:36.472187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:37.188 [2024-11-20 13:45:36.472194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:37.188 [2024-11-20 13:45:36.472201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:37.188 [2024-11-20 13:45:36.472207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:37.188 [2024-11-20 13:45:36.472215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:37.188 [2024-11-20 13:45:36.472221] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:37.188 [2024-11-20 13:45:36.472229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.188 [2024-11-20 13:45:36.472237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.188 [2024-11-20 13:45:36.472244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:37.188 [2024-11-20 13:45:36.472251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:37.188 [2024-11-20 13:45:36.472258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:37.188 [2024-11-20 13:45:36.472265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.472272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:37.188 [2024-11-20 13:45:36.472281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:25:37.188 [2024-11-20 13:45:36.472288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.497709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.497740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.188 [2024-11-20 13:45:36.497751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.359 ms 00:25:37.188 [2024-11-20 13:45:36.497758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.497880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.497893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:37.188 [2024-11-20 13:45:36.497901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:37.188 [2024-11-20 13:45:36.497908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.546681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.546827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.188 [2024-11-20 13:45:36.546846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.753 ms 00:25:37.188 [2024-11-20 13:45:36.546859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.546963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.546991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.188 [2024-11-20 13:45:36.547000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:37.188 [2024-11-20 13:45:36.547008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.547302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.547325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.188 [2024-11-20 13:45:36.547333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:25:37.188 [2024-11-20 13:45:36.547347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.547470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.547479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.188 [2024-11-20 13:45:36.547487] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:37.188 [2024-11-20 13:45:36.547494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.560563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.560594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.188 [2024-11-20 13:45:36.560604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.048 ms 00:25:37.188 [2024-11-20 13:45:36.560612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.572765] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:37.188 [2024-11-20 13:45:36.572798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:37.188 [2024-11-20 13:45:36.572810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.572819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:37.188 [2024-11-20 13:45:36.572827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.103 ms 00:25:37.188 [2024-11-20 13:45:36.572834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.596819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.188 [2024-11-20 13:45:36.596962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:37.188 [2024-11-20 13:45:36.596990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.908 ms 00:25:37.188 [2024-11-20 13:45:36.596998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.188 [2024-11-20 13:45:36.608430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.608542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:37.446 [2024-11-20 13:45:36.608556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.375 ms 00:25:37.446 [2024-11-20 13:45:36.608563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.619457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.619555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:37.446 [2024-11-20 13:45:36.619639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.835 ms 00:25:37.446 [2024-11-20 13:45:36.619662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.620292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.620379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:37.446 [2024-11-20 13:45:36.620432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:25:37.446 [2024-11-20 13:45:36.620454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.674394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.674563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:37.446 [2024-11-20 13:45:36.675011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.903 ms 00:25:37.446 [2024-11-20 13:45:36.675055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.685406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:37.446 [2024-11-20 13:45:36.699061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.699187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:37.446 [2024-11-20 13:45:36.699235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.827 ms 00:25:37.446 [2024-11-20 13:45:36.699262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.699362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.699531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:37.446 [2024-11-20 13:45:36.699564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:37.446 [2024-11-20 13:45:36.699583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.699650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.699788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:37.446 [2024-11-20 13:45:36.699819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:37.446 [2024-11-20 13:45:36.699838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.699891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.446 [2024-11-20 13:45:36.700243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:37.446 [2024-11-20 13:45:36.700327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:37.446 [2024-11-20 13:45:36.700382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.446 [2024-11-20 13:45:36.700448] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:37.447 [2024-11-20 13:45:36.700507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.700530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:37.447 [2024-11-20 13:45:36.700618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:37.447 [2024-11-20 13:45:36.700639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.447 [2024-11-20 13:45:36.723240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.723347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:37.447 [2024-11-20 13:45:36.723399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.564 ms 00:25:37.447 [2024-11-20 13:45:36.723422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.447 [2024-11-20 13:45:36.723516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.723575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:37.447 [2024-11-20 13:45:36.723600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:37.447 [2024-11-20 13:45:36.723619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
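The layout dump earlier in this startup reports 23592960 L2P entries with a 4-byte address size alongside a 90.00 MiB l2p region, and those figures agree with each other, as does the 4194304-byte cmp against the 1024-block copy. A quick arithmetic check in plain shell, just to make the relationships explicit (the 4096 B/block figure is the same assumption as above):

    # 23592960 entries * 4 B/entry = 94371840 B = 90 MiB
    # -> matches 'Region l2p ... blocks: 90.00 MiB' in the layout dump
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90

    # 1024 blocks * 4096 B/block = 4194304 B
    # -> matches the cmp --bytes=4194304 verification step
    echo $(( 1024 * 4096 ))                  # prints 4194304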
00:25:37.447 [2024-11-20 13:45:36.724457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.447 [2024-11-20 13:45:36.727378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.266 ms, result 0 00:25:37.447 [2024-11-20 13:45:36.728159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:37.447 [2024-11-20 13:45:36.740905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.447  [2024-11-20T13:45:36.874Z] Copying: 4096/4096 [kB] (average 42 MBps)[2024-11-20 13:45:36.838311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:37.447 [2024-11-20 13:45:36.846926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.847047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:37.447 [2024-11-20 13:45:36.847100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:37.447 [2024-11-20 13:45:36.847126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.447 [2024-11-20 13:45:36.847161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:37.447 [2024-11-20 13:45:36.849759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.849849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:37.447 [2024-11-20 13:45:36.849902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:25:37.447 [2024-11-20 13:45:36.849923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.447 [2024-11-20 13:45:36.851672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.851761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:37.447 [2024-11-20 13:45:36.851813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:25:37.447 [2024-11-20 13:45:36.851834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.447 [2024-11-20 13:45:36.855818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.855901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:37.447 [2024-11-20 13:45:36.855948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:25:37.447 [2024-11-20 13:45:36.855978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.447 [2024-11-20 13:45:36.862898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.447 [2024-11-20 13:45:36.863000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:37.447 [2024-11-20 13:45:36.863015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:25:37.447 [2024-11-20 13:45:36.863024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.885456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.885561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:37.705 [2024-11-20 13:45:36.885575] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.377 ms 00:25:37.705 [2024-11-20 13:45:36.885582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.899319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.899354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:37.705 [2024-11-20 13:45:36.899368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.708 ms 00:25:37.705 [2024-11-20 13:45:36.899377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.899496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.899505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:37.705 [2024-11-20 13:45:36.899513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:37.705 [2024-11-20 13:45:36.899520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.921951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.921993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:37.705 [2024-11-20 13:45:36.922004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.408 ms 00:25:37.705 [2024-11-20 13:45:36.922011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.944331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.944439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:37.705 [2024-11-20 13:45:36.944453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.286 ms 00:25:37.705 [2024-11-20 13:45:36.944460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.966105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.966135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:37.705 [2024-11-20 13:45:36.966145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.614 ms 00:25:37.705 [2024-11-20 13:45:36.966152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.987783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.705 [2024-11-20 13:45:36.987894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:37.705 [2024-11-20 13:45:36.987908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.571 ms 00:25:37.705 [2024-11-20 13:45:36.987915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.705 [2024-11-20 13:45:36.987944] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:37.705 [2024-11-20 13:45:36.987959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.987984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.987992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:37.705 [2024-11-20 13:45:36.988008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:37.705 [2024-11-20 13:45:36.988155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988547] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:37.706 [2024-11-20 13:45:36.988727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:37.706 [2024-11-20 13:45:36.988735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:37.707 [2024-11-20 13:45:36.988743] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:37.707 [2024-11-20 13:45:36.988750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:37.707 [2024-11-20 13:45:36.988757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:37.707 [2024-11-20 13:45:36.988765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:37.707 [2024-11-20 13:45:36.988772] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:37.707 [2024-11-20 13:45:36.988783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:37.707 [2024-11-20 13:45:36.988791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:37.707 [2024-11-20 13:45:36.988797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:37.707 [2024-11-20 13:45:36.988803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:37.707 [2024-11-20 13:45:36.988810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.707 [2024-11-20 13:45:36.988820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:37.707 [2024-11-20 13:45:36.988828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:25:37.707 [2024-11-20 13:45:36.988835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.001226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.707 [2024-11-20 13:45:37.001323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:37.707 [2024-11-20 13:45:37.001397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.359 ms 00:25:37.707 [2024-11-20 13:45:37.001419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.001788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.707 [2024-11-20 13:45:37.001815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:37.707 [2024-11-20 13:45:37.001876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:25:37.707 [2024-11-20 13:45:37.001898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.035963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.707 [2024-11-20 13:45:37.036093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.707 [2024-11-20 13:45:37.036167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.707 [2024-11-20 13:45:37.036188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.036278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.707 [2024-11-20 13:45:37.036300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.707 [2024-11-20 13:45:37.036319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.707 [2024-11-20 13:45:37.036373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.036429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.707 [2024-11-20 13:45:37.036452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.707 [2024-11-20 13:45:37.036471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.707 [2024-11-20 13:45:37.036490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.036518] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.707 [2024-11-20 13:45:37.036609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.707 [2024-11-20 13:45:37.036628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.707 [2024-11-20 13:45:37.036646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.707 [2024-11-20 13:45:37.110744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.707 [2024-11-20 13:45:37.110892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.707 [2024-11-20 13:45:37.110940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.707 [2024-11-20 13:45:37.110961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.173429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.173578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.965 [2024-11-20 13:45:37.173624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.173645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.173706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.173728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.965 [2024-11-20 13:45:37.173747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.173766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.173803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.173822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.965 [2024-11-20 13:45:37.173846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.173907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.174029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.174054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.965 [2024-11-20 13:45:37.174074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.174193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.174236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.174258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:37.965 [2024-11-20 13:45:37.174323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.174345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.174389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.174411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.965 [2024-11-20 13:45:37.174429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.174448] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.174498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.965 [2024-11-20 13:45:37.174582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.965 [2024-11-20 13:45:37.174606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.965 [2024-11-20 13:45:37.174623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.965 [2024-11-20 13:45:37.174761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.813 ms, result 0 00:25:38.530 00:25:38.530 00:25:38.530 13:45:37 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76988 00:25:38.530 13:45:37 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:38.530 13:45:37 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76988 00:25:38.530 13:45:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76988 ']' 00:25:38.530 13:45:37 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.530 13:45:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.530 13:45:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.530 13:45:37 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.530 13:45:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:38.530 [2024-11-20 13:45:37.951952] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
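The xtrace lines above show the pattern the trim test uses to bring up the target: trim.sh@92 launches spdk_tgt with the ftl_init log flag, trim.sh@93 records its pid in svcpid, and trim.sh@94 hands that pid to waitforlisten, which blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait loop, assuming rpc_get_methods as the probe RPC (the real waitforlisten helper lives in common/autotest_common.sh and is more thorough):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk        # path as it appears in the trace
  "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!
  for _ in $(seq 1 100); do
      # Probe the RPC socket with a known-good RPC until the target answers.
      if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$svcpid" 2>/dev/null || exit 1  # bail out if the target died early
      sleep 0.1
  done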
00:25:38.530 [2024-11-20 13:45:37.952240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76988 ] 00:25:38.788 [2024-11-20 13:45:38.110288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.788 [2024-11-20 13:45:38.207090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.722 13:45:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.722 13:45:38 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:39.722 13:45:38 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:39.722 [2024-11-20 13:45:39.001051] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:39.722 [2024-11-20 13:45:39.001114] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:39.980 [2024-11-20 13:45:39.171028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.171083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:39.980 [2024-11-20 13:45:39.171098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:39.980 [2024-11-20 13:45:39.171107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.980 [2024-11-20 13:45:39.173831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.173865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:39.980 [2024-11-20 13:45:39.173877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:25:39.980 [2024-11-20 13:45:39.173886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.980 [2024-11-20 13:45:39.173957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:39.980 [2024-11-20 13:45:39.174667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:39.980 [2024-11-20 13:45:39.174810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.174822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:39.980 [2024-11-20 13:45:39.174833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:25:39.980 [2024-11-20 13:45:39.174842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.980 [2024-11-20 13:45:39.176008] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:39.980 [2024-11-20 13:45:39.188031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.188066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:39.980 [2024-11-20 13:45:39.188077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.028 ms 00:25:39.980 [2024-11-20 13:45:39.188087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.980 [2024-11-20 13:45:39.188173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.188186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:39.980 [2024-11-20 13:45:39.188194] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:39.980 [2024-11-20 13:45:39.188203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.980 [2024-11-20 13:45:39.192755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.192788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:39.980 [2024-11-20 13:45:39.192797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.505 ms 00:25:39.980 [2024-11-20 13:45:39.192806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.980 [2024-11-20 13:45:39.192904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.980 [2024-11-20 13:45:39.192916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:39.981 [2024-11-20 13:45:39.192924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:39.981 [2024-11-20 13:45:39.192933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.192960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.192993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:39.981 [2024-11-20 13:45:39.193002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:39.981 [2024-11-20 13:45:39.193011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.193032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:39.981 [2024-11-20 13:45:39.196135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.196160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:39.981 [2024-11-20 13:45:39.196170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.105 ms 00:25:39.981 [2024-11-20 13:45:39.196178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.196213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.196221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:39.981 [2024-11-20 13:45:39.196231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:39.981 [2024-11-20 13:45:39.196239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.196260] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:39.981 [2024-11-20 13:45:39.196278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:39.981 [2024-11-20 13:45:39.196317] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:39.981 [2024-11-20 13:45:39.196331] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:39.981 [2024-11-20 13:45:39.196434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:39.981 [2024-11-20 13:45:39.196444] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:39.981 [2024-11-20 13:45:39.196460] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:39.981 [2024-11-20 13:45:39.196469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:39.981 [2024-11-20 13:45:39.196479] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:39.981 [2024-11-20 13:45:39.196487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:39.981 [2024-11-20 13:45:39.196496] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:39.981 [2024-11-20 13:45:39.196503] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:39.981 [2024-11-20 13:45:39.196513] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:39.981 [2024-11-20 13:45:39.196520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.196530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:39.981 [2024-11-20 13:45:39.196538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:39.981 [2024-11-20 13:45:39.196547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.196641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.196650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:39.981 [2024-11-20 13:45:39.196657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:39.981 [2024-11-20 13:45:39.196665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.196763] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:39.981 [2024-11-20 13:45:39.196774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:39.981 [2024-11-20 13:45:39.196782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:39.981 [2024-11-20 13:45:39.196791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.981 [2024-11-20 13:45:39.196798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:39.981 [2024-11-20 13:45:39.196807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:39.981 [2024-11-20 13:45:39.196814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:39.981 [2024-11-20 13:45:39.196826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:39.981 [2024-11-20 13:45:39.196833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:39.981 [2024-11-20 13:45:39.196841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:39.981 [2024-11-20 13:45:39.196856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:39.981 [2024-11-20 13:45:39.196865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:39.981 [2024-11-20 13:45:39.196871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:39.981 [2024-11-20 13:45:39.196879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:39.981 [2024-11-20 13:45:39.196886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:39.981 [2024-11-20 13:45:39.196893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.981 
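The layout numbers reported above are internally consistent and worth a quick cross-check: ftl_layout.c reports 23592960 L2P entries at 4 bytes each, and 23592960 x 4 B = 94371840 B, exactly the 90.00 MiB shown for the l2p region. The same entry count also explains the --lba 23591936 passed to the second bdev_ftl_unmap later in this run: it is the start of the last 1024-block chunk of the L2P range. A one-line check using only values taken from the dump:

  l2p_entries=23592960   # "L2P entries" above
  addr_size=4            # "L2P address size" above, in bytes
  echo "$(( l2p_entries * addr_size / 1024 / 1024 )) MiB"   # -> 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
  echo "$(( l2p_entries - 1024 ))"                          # -> 23591936, the --lba of the second unmap below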
[2024-11-20 13:45:39.196900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:39.981 [2024-11-20 13:45:39.196908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:39.981 [2024-11-20 13:45:39.196915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.981 [2024-11-20 13:45:39.196923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:39.981 [2024-11-20 13:45:39.196935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:39.981 [2024-11-20 13:45:39.196945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.981 [2024-11-20 13:45:39.196952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:39.981 [2024-11-20 13:45:39.197126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.981 [2024-11-20 13:45:39.197180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:39.981 [2024-11-20 13:45:39.197199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.981 [2024-11-20 13:45:39.197236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:39.981 [2024-11-20 13:45:39.197256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.981 [2024-11-20 13:45:39.197292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:39.981 [2024-11-20 13:45:39.197310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:39.981 [2024-11-20 13:45:39.197408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:39.981 [2024-11-20 13:45:39.197432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:39.981 [2024-11-20 13:45:39.197451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:39.981 [2024-11-20 13:45:39.197471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:39.981 [2024-11-20 13:45:39.197490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:39.981 [2024-11-20 13:45:39.197511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:39.981 [2024-11-20 13:45:39.197548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:39.981 [2024-11-20 13:45:39.197566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197627] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:39.981 [2024-11-20 13:45:39.197652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:39.981 [2024-11-20 13:45:39.197672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:39.981 [2024-11-20 13:45:39.197691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.981 [2024-11-20 13:45:39.197711] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:39.981 [2024-11-20 13:45:39.197730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:39.981 [2024-11-20 13:45:39.197749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:39.981 [2024-11-20 13:45:39.197768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:39.981 [2024-11-20 13:45:39.197787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:39.981 [2024-11-20 13:45:39.197805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:39.981 [2024-11-20 13:45:39.197868] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:39.981 [2024-11-20 13:45:39.197902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.197935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:39.981 [2024-11-20 13:45:39.197963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:39.981 [2024-11-20 13:45:39.198008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:39.981 [2024-11-20 13:45:39.198037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:39.981 [2024-11-20 13:45:39.198105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:39.981 [2024-11-20 13:45:39.198135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:39.981 [2024-11-20 13:45:39.198164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:39.981 [2024-11-20 13:45:39.198192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:39.981 [2024-11-20 13:45:39.198222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:39.981 [2024-11-20 13:45:39.198250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.198322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.198373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.198384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.198392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:39.981 [2024-11-20 13:45:39.198401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:39.981 [2024-11-20 
13:45:39.198410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.198421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:39.981 [2024-11-20 13:45:39.198428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:39.981 [2024-11-20 13:45:39.198436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:39.981 [2024-11-20 13:45:39.198444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:39.981 [2024-11-20 13:45:39.198453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.198461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:39.981 [2024-11-20 13:45:39.198470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:25:39.981 [2024-11-20 13:45:39.198477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.223765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.223798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:39.981 [2024-11-20 13:45:39.223810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.197 ms 00:25:39.981 [2024-11-20 13:45:39.223820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.223932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.223941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:39.981 [2024-11-20 13:45:39.223951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:39.981 [2024-11-20 13:45:39.223958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.981 [2024-11-20 13:45:39.254320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.981 [2024-11-20 13:45:39.254354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:39.982 [2024-11-20 13:45:39.254365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.324 ms 00:25:39.982 [2024-11-20 13:45:39.254372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.254426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.254435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:39.982 [2024-11-20 13:45:39.254445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:39.982 [2024-11-20 13:45:39.254452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.254752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.254764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:39.982 [2024-11-20 13:45:39.254776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:39.982 [2024-11-20 13:45:39.254783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.254903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.254911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:39.982 [2024-11-20 13:45:39.254920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:39.982 [2024-11-20 13:45:39.254928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.268945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.268992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:39.982 [2024-11-20 13:45:39.269005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.995 ms 00:25:39.982 [2024-11-20 13:45:39.269012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.300684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:39.982 [2024-11-20 13:45:39.300722] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:39.982 [2024-11-20 13:45:39.300736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.300745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:39.982 [2024-11-20 13:45:39.300757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.615 ms 00:25:39.982 [2024-11-20 13:45:39.300764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.324723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.324756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:39.982 [2024-11-20 13:45:39.324769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.888 ms 00:25:39.982 [2024-11-20 13:45:39.324777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.336200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.336323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:39.982 [2024-11-20 13:45:39.336343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.367 ms 00:25:39.982 [2024-11-20 13:45:39.336351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.347367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.347469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:39.982 [2024-11-20 13:45:39.347486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.955 ms 00:25:39.982 [2024-11-20 13:45:39.347493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 13:45:39.348124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.348144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:39.982 [2024-11-20 13:45:39.348155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:25:39.982 [2024-11-20 13:45:39.348162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.982 [2024-11-20 
13:45:39.401955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.982 [2024-11-20 13:45:39.402137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:39.982 [2024-11-20 13:45:39.402158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.766 ms 00:25:39.982 [2024-11-20 13:45:39.402167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.412872] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:40.241 [2024-11-20 13:45:39.426417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.426463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:40.241 [2024-11-20 13:45:39.426478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:25:40.241 [2024-11-20 13:45:39.426488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.426574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.426586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:40.241 [2024-11-20 13:45:39.426595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:40.241 [2024-11-20 13:45:39.426604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.426648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.426658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:40.241 [2024-11-20 13:45:39.426666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:40.241 [2024-11-20 13:45:39.426677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.426699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.426708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:40.241 [2024-11-20 13:45:39.426717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:40.241 [2024-11-20 13:45:39.426729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.426759] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:40.241 [2024-11-20 13:45:39.426771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.426778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:40.241 [2024-11-20 13:45:39.426790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:40.241 [2024-11-20 13:45:39.426797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.450424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.450461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:40.241 [2024-11-20 13:45:39.450475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.600 ms 00:25:40.241 [2024-11-20 13:45:39.450484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.450573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.241 [2024-11-20 13:45:39.450584] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:40.241 [2024-11-20 13:45:39.450593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:40.241 [2024-11-20 13:45:39.450603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.241 [2024-11-20 13:45:39.451420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:40.241 [2024-11-20 13:45:39.454382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.044 ms, result 0 00:25:40.241 [2024-11-20 13:45:39.455258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:40.241 Some configs were skipped because the RPC state that can call them passed over. 00:25:40.241 13:45:39 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:40.577 [2024-11-20 13:45:39.681288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.577 [2024-11-20 13:45:39.681462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:40.577 [2024-11-20 13:45:39.681518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:25:40.577 [2024-11-20 13:45:39.681544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.577 [2024-11-20 13:45:39.681596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.549 ms, result 0 00:25:40.577 true 00:25:40.577 13:45:39 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:40.577 [2024-11-20 13:45:39.889369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.577 [2024-11-20 13:45:39.889502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:40.577 [2024-11-20 13:45:39.889558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:25:40.577 [2024-11-20 13:45:39.889581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.577 [2024-11-20 13:45:39.889631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.312 ms, result 0 00:25:40.577 true 00:25:40.577 13:45:39 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76988 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76988 ']' 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76988 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76988 00:25:40.577 killing process with pid 76988 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76988' 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76988 00:25:40.577 13:45:39 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76988 00:25:41.533 [2024-11-20 13:45:40.627313] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.627546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.533 [2024-11-20 13:45:40.627565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:41.533 [2024-11-20 13:45:40.627575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.627604] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:41.533 [2024-11-20 13:45:40.630180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.630210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.533 [2024-11-20 13:45:40.630225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:25:41.533 [2024-11-20 13:45:40.630233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.630525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.630554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:41.533 [2024-11-20 13:45:40.630564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:25:41.533 [2024-11-20 13:45:40.630572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.634544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.634572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.533 [2024-11-20 13:45:40.634586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:25:41.533 [2024-11-20 13:45:40.634594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.641569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.641686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:41.533 [2024-11-20 13:45:40.641704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:25:41.533 [2024-11-20 13:45:40.641712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.650658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.650687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.533 [2024-11-20 13:45:40.650700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.892 ms 00:25:41.533 [2024-11-20 13:45:40.650713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.657602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.657633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.533 [2024-11-20 13:45:40.657646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:25:41.533 [2024-11-20 13:45:40.657654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.657773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.657782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.533 [2024-11-20 13:45:40.657792] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:41.533 [2024-11-20 13:45:40.657799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.667486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.667515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.533 [2024-11-20 13:45:40.667525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.666 ms 00:25:41.533 [2024-11-20 13:45:40.667531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.676369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.676483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.533 [2024-11-20 13:45:40.676502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.803 ms 00:25:41.533 [2024-11-20 13:45:40.676509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.685537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.685564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.533 [2024-11-20 13:45:40.685576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.994 ms 00:25:41.533 [2024-11-20 13:45:40.685583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.694387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.533 [2024-11-20 13:45:40.694415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.533 [2024-11-20 13:45:40.694426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.728 ms 00:25:41.533 [2024-11-20 13:45:40.694433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.533 [2024-11-20 13:45:40.694465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.533 [2024-11-20 13:45:40.694479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:41.533 [2024-11-20 13:45:40.694490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:41.533 [2024-11-20 13:45:40.694498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.533 [2024-11-20 13:45:40.694507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.533 [2024-11-20 13:45:40.694515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.533 [2024-11-20 13:45:40.694525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694566] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 
[2024-11-20 13:45:40.694771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:25:41.534 [2024-11-20 13:45:40.694990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.694999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.534 [2024-11-20 13:45:40.695176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.535 [2024-11-20 13:45:40.695354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.535 [2024-11-20 13:45:40.695366] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:41.535 [2024-11-20 13:45:40.695379] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:41.535 [2024-11-20 13:45:40.695390] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:41.535 [2024-11-20 13:45:40.695397] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:41.535 [2024-11-20 13:45:40.695406] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:41.535 [2024-11-20 13:45:40.695413] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.535 [2024-11-20 13:45:40.695422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.535 [2024-11-20 13:45:40.695429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.535 [2024-11-20 13:45:40.695436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.535 [2024-11-20 13:45:40.695443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.535 [2024-11-20 13:45:40.695451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
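In the statistics block above, WAF is the write-amplification factor; going by the usual definition (total media writes divided by user writes), 960 total writes against 0 user writes is an undefined ratio, which the FTL reports as inf. With every band still free at wr_cnt 0, those 960 writes presumably went to FTL metadata rather than user data. A guarded recomputation from the two counters in the dump:

  total_writes=960   # "total writes" above
  user_writes=0      # "user writes" above
  awk -v t="$total_writes" -v u="$user_writes" \
      'BEGIN { print (u ? ("WAF: " t / u) : "WAF: inf") }'   # -> WAF: inf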
00:25:41.535 [2024-11-20 13:45:40.695458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:41.535 [2024-11-20 13:45:40.695467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:25:41.535 [2024-11-20 13:45:40.695474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.707495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.535 [2024-11-20 13:45:40.707523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.535 [2024-11-20 13:45:40.707538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.998 ms 00:25:41.535 [2024-11-20 13:45:40.707546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.707909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.535 [2024-11-20 13:45:40.707918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.535 [2024-11-20 13:45:40.707929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:41.535 [2024-11-20 13:45:40.707937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.750733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.750883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.535 [2024-11-20 13:45:40.750902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.750909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.751036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.751047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.535 [2024-11-20 13:45:40.751056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.751066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.751109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.751118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.535 [2024-11-20 13:45:40.751129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.751136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.751155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.751162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.535 [2024-11-20 13:45:40.751171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.751178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.826108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.826156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.535 [2024-11-20 13:45:40.826169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.826176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 
13:45:40.888902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.888949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.535 [2024-11-20 13:45:40.888962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.888990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.889095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.535 [2024-11-20 13:45:40.889107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.889114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.889150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.535 [2024-11-20 13:45:40.889159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.889166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.889264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.535 [2024-11-20 13:45:40.889273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.889280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.889320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:41.535 [2024-11-20 13:45:40.889329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.889336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.889384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.535 [2024-11-20 13:45:40.889394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.889401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.535 [2024-11-20 13:45:40.889452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.535 [2024-11-20 13:45:40.889462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.535 [2024-11-20 13:45:40.889468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.535 [2024-11-20 13:45:40.889594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.261 ms, result 0 00:25:42.100 13:45:41 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.359 [2024-11-20 13:45:41.532854] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:25:42.359 [2024-11-20 13:45:41.533015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77041 ] 00:25:42.359 [2024-11-20 13:45:41.690764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.359 [2024-11-20 13:45:41.772873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.618 [2024-11-20 13:45:41.986164] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.618 [2024-11-20 13:45:41.986366] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.878 [2024-11-20 13:45:42.136687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.136727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:42.878 [2024-11-20 13:45:42.136738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:42.878 [2024-11-20 13:45:42.136748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.138919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.138950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.878 [2024-11-20 13:45:42.138958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:25:42.878 [2024-11-20 13:45:42.138964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.139031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:42.878 [2024-11-20 13:45:42.139600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:42.878 [2024-11-20 13:45:42.139621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.139627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.878 [2024-11-20 13:45:42.139635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:25:42.878 [2024-11-20 13:45:42.139640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.140932] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:42.878 [2024-11-20 13:45:42.150587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.150620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:42.878 [2024-11-20 13:45:42.150631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.657 ms 00:25:42.878 [2024-11-20 13:45:42.150638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.150715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.150724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:42.878 [2024-11-20 13:45:42.150731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:42.878 [2024-11-20 
13:45:42.150737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.155066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.155092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.878 [2024-11-20 13:45:42.155100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:25:42.878 [2024-11-20 13:45:42.155106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.155180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.155187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.878 [2024-11-20 13:45:42.155194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:42.878 [2024-11-20 13:45:42.155200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.155219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.155227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:42.878 [2024-11-20 13:45:42.155233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:42.878 [2024-11-20 13:45:42.155239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.155259] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:42.878 [2024-11-20 13:45:42.157883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.158034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.878 [2024-11-20 13:45:42.158053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.628 ms 00:25:42.878 [2024-11-20 13:45:42.158064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.158094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.158101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:42.878 [2024-11-20 13:45:42.158108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:42.878 [2024-11-20 13:45:42.158114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.158129] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:42.878 [2024-11-20 13:45:42.158146] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:42.878 [2024-11-20 13:45:42.158173] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:42.878 [2024-11-20 13:45:42.158185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:42.878 [2024-11-20 13:45:42.158266] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:42.878 [2024-11-20 13:45:42.158275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:42.878 [2024-11-20 13:45:42.158283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:25:42.878 [2024-11-20 13:45:42.158291] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:42.878 [2024-11-20 13:45:42.158301] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:42.878 [2024-11-20 13:45:42.158307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:42.878 [2024-11-20 13:45:42.158313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:42.878 [2024-11-20 13:45:42.158319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:42.878 [2024-11-20 13:45:42.158325] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:42.878 [2024-11-20 13:45:42.158331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.158336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:42.878 [2024-11-20 13:45:42.158342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:25:42.878 [2024-11-20 13:45:42.158347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.158416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.878 [2024-11-20 13:45:42.158425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:42.878 [2024-11-20 13:45:42.158430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:42.878 [2024-11-20 13:45:42.158436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.878 [2024-11-20 13:45:42.158516] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:42.878 [2024-11-20 13:45:42.158524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:42.878 [2024-11-20 13:45:42.158530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.878 [2024-11-20 13:45:42.158536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.878 [2024-11-20 13:45:42.158542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:42.878 [2024-11-20 13:45:42.158547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:42.878 [2024-11-20 13:45:42.158553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:42.878 [2024-11-20 13:45:42.158558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:42.878 [2024-11-20 13:45:42.158565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:42.878 [2024-11-20 13:45:42.158570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.878 [2024-11-20 13:45:42.158575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:42.878 [2024-11-20 13:45:42.158580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:42.878 [2024-11-20 13:45:42.158585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.878 [2024-11-20 13:45:42.158596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:42.878 [2024-11-20 13:45:42.158602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:42.878 [2024-11-20 13:45:42.158608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.878 [2024-11-20 13:45:42.158613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:25:42.878 [2024-11-20 13:45:42.158618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:42.878 [2024-11-20 13:45:42.158623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:42.879 [2024-11-20 13:45:42.158633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.879 [2024-11-20 13:45:42.158643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:42.879 [2024-11-20 13:45:42.158648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.879 [2024-11-20 13:45:42.158659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:42.879 [2024-11-20 13:45:42.158664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.879 [2024-11-20 13:45:42.158674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:42.879 [2024-11-20 13:45:42.158679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.879 [2024-11-20 13:45:42.158689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:42.879 [2024-11-20 13:45:42.158694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.879 [2024-11-20 13:45:42.158704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:42.879 [2024-11-20 13:45:42.158709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:42.879 [2024-11-20 13:45:42.158714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.879 [2024-11-20 13:45:42.158719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:42.879 [2024-11-20 13:45:42.158723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:42.879 [2024-11-20 13:45:42.158728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:42.879 [2024-11-20 13:45:42.158738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:42.879 [2024-11-20 13:45:42.158743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158749] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:42.879 [2024-11-20 13:45:42.158755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:42.879 [2024-11-20 13:45:42.158760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.879 [2024-11-20 13:45:42.158769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.879 [2024-11-20 13:45:42.158776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:42.879 [2024-11-20 13:45:42.158781] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:42.879 [2024-11-20 13:45:42.158786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:42.879 [2024-11-20 13:45:42.158791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:42.879 [2024-11-20 13:45:42.158797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:42.879 [2024-11-20 13:45:42.158802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:42.879 [2024-11-20 13:45:42.158808] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:42.879 [2024-11-20 13:45:42.158816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:42.879 [2024-11-20 13:45:42.158828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:42.879 [2024-11-20 13:45:42.158834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:42.879 [2024-11-20 13:45:42.158839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:42.879 [2024-11-20 13:45:42.158845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:42.879 [2024-11-20 13:45:42.158851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:42.879 [2024-11-20 13:45:42.158856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:42.879 [2024-11-20 13:45:42.158862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:42.879 [2024-11-20 13:45:42.158867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:42.879 [2024-11-20 13:45:42.158873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:42.879 [2024-11-20 13:45:42.158900] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.879 [2024-11-20 13:45:42.158907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.879 [2024-11-20 13:45:42.158918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.879 [2024-11-20 13:45:42.158924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.879 [2024-11-20 13:45:42.158929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:42.879 [2024-11-20 13:45:42.158935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.158941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.879 [2024-11-20 13:45:42.158949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:25:42.879 [2024-11-20 13:45:42.158955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.179995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.180021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.879 [2024-11-20 13:45:42.180030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.984 ms 00:25:42.879 [2024-11-20 13:45:42.180036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.180140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.180150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:42.879 [2024-11-20 13:45:42.180157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:42.879 [2024-11-20 13:45:42.180163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.221144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.221179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.879 [2024-11-20 13:45:42.221190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.963 ms 00:25:42.879 [2024-11-20 13:45:42.221199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.221274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.221283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.879 [2024-11-20 13:45:42.221290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.879 [2024-11-20 13:45:42.221297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.221586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.221598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.879 [2024-11-20 13:45:42.221605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:25:42.879 [2024-11-20 13:45:42.221612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.221721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.221728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.879 [2024-11-20 13:45:42.221735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:42.879 [2024-11-20 13:45:42.221741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.232884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.232913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.879 [2024-11-20 13:45:42.232921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.125 ms 00:25:42.879 [2024-11-20 13:45:42.232928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.242959] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:42.879 [2024-11-20 13:45:42.242995] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:42.879 [2024-11-20 13:45:42.243004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.243011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:42.879 [2024-11-20 13:45:42.243019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.961 ms 00:25:42.879 [2024-11-20 13:45:42.243025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.879 [2024-11-20 13:45:42.262441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.879 [2024-11-20 13:45:42.262575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:42.880 [2024-11-20 13:45:42.262592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.350 ms 00:25:42.880 [2024-11-20 13:45:42.262599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.880 [2024-11-20 13:45:42.271895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.880 [2024-11-20 13:45:42.271999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:42.880 [2024-11-20 13:45:42.272055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.228 ms 00:25:42.880 [2024-11-20 13:45:42.272073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.880 [2024-11-20 13:45:42.281074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.880 [2024-11-20 13:45:42.281168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:42.880 [2024-11-20 13:45:42.281212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:25:42.880 [2024-11-20 13:45:42.281229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.880 [2024-11-20 13:45:42.281715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.880 [2024-11-20 13:45:42.281791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:42.880 [2024-11-20 13:45:42.281834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:25:42.880 [2024-11-20 13:45:42.281851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.325793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.325976] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:43.138 [2024-11-20 13:45:42.326024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.910 ms 00:25:43.138 [2024-11-20 13:45:42.326043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.334299] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:43.138 [2024-11-20 13:45:42.346526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.346634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.138 [2024-11-20 13:45:42.346677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.393 ms 00:25:43.138 [2024-11-20 13:45:42.346699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.346820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.346872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:43.138 [2024-11-20 13:45:42.346909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:43.138 [2024-11-20 13:45:42.346926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.346987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.347116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.138 [2024-11-20 13:45:42.347143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:43.138 [2024-11-20 13:45:42.347159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.347337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.347440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.138 [2024-11-20 13:45:42.347485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:43.138 [2024-11-20 13:45:42.347537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.347583] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:43.138 [2024-11-20 13:45:42.347629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.347646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:43.138 [2024-11-20 13:45:42.347663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:43.138 [2024-11-20 13:45:42.347696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.366114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.366210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.138 [2024-11-20 13:45:42.366253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.388 ms 00:25:43.138 [2024-11-20 13:45:42.366271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.366350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.138 [2024-11-20 13:45:42.366450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:43.138 [2024-11-20 13:45:42.366469] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:43.138 [2024-11-20 13:45:42.366484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.138 [2024-11-20 13:45:42.367217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.138 [2024-11-20 13:45:42.369635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.302 ms, result 0 00:25:43.138 [2024-11-20 13:45:42.370166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:43.138 [2024-11-20 13:45:42.385200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.070  [2024-11-20T13:45:44.482Z] Copying: 48/256 [MB] (48 MBps) [2024-11-20T13:45:45.856Z] Copying: 92/256 [MB] (43 MBps) [2024-11-20T13:45:46.789Z] Copying: 142/256 [MB] (49 MBps) [2024-11-20T13:45:47.721Z] Copying: 186/256 [MB] (43 MBps) [2024-11-20T13:45:48.287Z] Copying: 228/256 [MB] (42 MBps) [2024-11-20T13:45:48.546Z] Copying: 256/256 [MB] (average 45 MBps)[2024-11-20 13:45:48.472156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:49.119 [2024-11-20 13:45:48.479902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.480039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:49.119 [2024-11-20 13:45:48.480094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:49.119 [2024-11-20 13:45:48.480120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.480154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:49.119 [2024-11-20 13:45:48.482285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.482378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:49.119 [2024-11-20 13:45:48.482428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:25:49.119 [2024-11-20 13:45:48.482446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.482690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.482719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:49.119 [2024-11-20 13:45:48.482736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:25:49.119 [2024-11-20 13:45:48.482861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.485986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.486060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:49.119 [2024-11-20 13:45:48.486105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:25:49.119 [2024-11-20 13:45:48.486123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.491533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.491612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:49.119 [2024-11-20 13:45:48.491748] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.385 ms 00:25:49.119 [2024-11-20 13:45:48.491756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.510311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.510404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:49.119 [2024-11-20 13:45:48.510447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.517 ms 00:25:49.119 [2024-11-20 13:45:48.510465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.521741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.521839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:49.119 [2024-11-20 13:45:48.521885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.235 ms 00:25:49.119 [2024-11-20 13:45:48.521904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.522030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.522081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:49.119 [2024-11-20 13:45:48.522100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:49.119 [2024-11-20 13:45:48.522115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.119 [2024-11-20 13:45:48.540179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.119 [2024-11-20 13:45:48.540267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:49.119 [2024-11-20 13:45:48.540307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.015 ms 00:25:49.119 [2024-11-20 13:45:48.540324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.378 [2024-11-20 13:45:48.558301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.378 [2024-11-20 13:45:48.558387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:49.378 [2024-11-20 13:45:48.558427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.951 ms 00:25:49.378 [2024-11-20 13:45:48.558444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.378 [2024-11-20 13:45:48.575658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.378 [2024-11-20 13:45:48.575748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:49.378 [2024-11-20 13:45:48.575804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.186 ms 00:25:49.378 [2024-11-20 13:45:48.575821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.378 [2024-11-20 13:45:48.594176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.378 [2024-11-20 13:45:48.594288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:49.378 [2024-11-20 13:45:48.594443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.292 ms 00:25:49.378 [2024-11-20 13:45:48.594468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.378 [2024-11-20 13:45:48.594499] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:49.378 [2024-11-20 13:45:48.594520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.594980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595443] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.595999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 
13:45:48.596190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:49.378 [2024-11-20 13:45:48.596594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.596954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:25:49.379 [2024-11-20 13:45:48.597161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:25:49.379 [2024-11-20 13:45:48.597940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:49.379 [2024-11-20 13:45:48.597955] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11e97a26-71f7-4abf-a886-7273f21decce 00:25:49.379 [2024-11-20 13:45:48.597987] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:49.379 [2024-11-20 13:45:48.598002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:49.379 [2024-11-20 13:45:48.598009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:49.379 [2024-11-20 13:45:48.598016] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:49.379 [2024-11-20 13:45:48.598022] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:49.379 [2024-11-20 13:45:48.598029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:49.379 [2024-11-20 13:45:48.598035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:49.379 [2024-11-20 13:45:48.598040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:49.379 [2024-11-20 13:45:48.598045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:49.379 [2024-11-20 13:45:48.598051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.379 [2024-11-20 13:45:48.598061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:49.379 [2024-11-20 13:45:48.598068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.553 ms 00:25:49.379 [2024-11-20 13:45:48.598074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.608374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.379 [2024-11-20 13:45:48.608402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:49.379 [2024-11-20 13:45:48.608410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.269 ms 00:25:49.379 [2024-11-20 13:45:48.608416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.608712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.379 [2024-11-20 13:45:48.608719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:49.379 [2024-11-20 13:45:48.608726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:25:49.379 [2024-11-20 13:45:48.608732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.638559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.638601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:49.379 [2024-11-20 13:45:48.638611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.638616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.638692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.638699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:49.379 [2024-11-20 13:45:48.638705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.638712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:49.379 [2024-11-20 13:45:48.638753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.638760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:49.379 [2024-11-20 13:45:48.638766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.638772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.638786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.638795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:49.379 [2024-11-20 13:45:48.638801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.638806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.701093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.701138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.379 [2024-11-20 13:45:48.701148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.701155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.751546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.379 [2024-11-20 13:45:48.751556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.751563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.751616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.379 [2024-11-20 13:45:48.751623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.751628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.751658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.379 [2024-11-20 13:45:48.751668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.751674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.751748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.379 [2024-11-20 13:45:48.751755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.751761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.751793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:49.379 [2024-11-20 13:45:48.751799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 
13:45:48.751807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.379 [2024-11-20 13:45:48.751844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.379 [2024-11-20 13:45:48.751850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.379 [2024-11-20 13:45:48.751856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.379 [2024-11-20 13:45:48.751889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.380 [2024-11-20 13:45:48.751896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.380 [2024-11-20 13:45:48.751905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.380 [2024-11-20 13:45:48.751911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.380 [2024-11-20 13:45:48.752035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.118 ms, result 0 00:25:49.946 00:25:49.946 00:25:49.946 13:45:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:50.513 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:50.513 13:45:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76988 00:25:50.513 13:45:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76988 ']' 00:25:50.513 13:45:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76988 00:25:50.513 Process with pid 76988 is not found 00:25:50.513 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76988) - No such process 00:25:50.513 13:45:49 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76988 is not found' 00:25:50.513 00:25:50.513 real 0m50.089s 00:25:50.513 user 1m6.924s 00:25:50.513 sys 0m10.526s 00:25:50.513 ************************************ 00:25:50.513 END TEST ftl_trim 00:25:50.513 ************************************ 00:25:50.513 13:45:49 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.513 13:45:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:50.513 13:45:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:50.513 13:45:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:50.513 13:45:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.513 13:45:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:50.513 ************************************ 00:25:50.513 START TEST ftl_restore 00:25:50.513 ************************************ 00:25:50.513 13:45:49 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:25:50.513 * Looking for test storage... 00:25:50.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:50.513 13:45:49 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:50.513 13:45:49 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:25:50.513 13:45:49 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:50.772 13:45:49 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:50.772 13:45:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:50.772 13:45:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.772 13:45:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:50.772 13:45:50 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.772 13:45:50 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.772 13:45:50 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.772 13:45:50 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:50.772 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.772 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:50.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.772 --rc genhtml_branch_coverage=1 00:25:50.772 --rc genhtml_function_coverage=1 00:25:50.772 --rc genhtml_legend=1 00:25:50.772 --rc geninfo_all_blocks=1 00:25:50.772 --rc geninfo_unexecuted_blocks=1 00:25:50.772 00:25:50.772 ' 00:25:50.772 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:50.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.772 --rc 
genhtml_branch_coverage=1 00:25:50.772 --rc genhtml_function_coverage=1 00:25:50.772 --rc genhtml_legend=1 00:25:50.772 --rc geninfo_all_blocks=1 00:25:50.772 --rc geninfo_unexecuted_blocks=1 00:25:50.772 00:25:50.772 ' 00:25:50.772 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:50.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.772 --rc genhtml_branch_coverage=1 00:25:50.772 --rc genhtml_function_coverage=1 00:25:50.772 --rc genhtml_legend=1 00:25:50.772 --rc geninfo_all_blocks=1 00:25:50.772 --rc geninfo_unexecuted_blocks=1 00:25:50.772 00:25:50.772 ' 00:25:50.772 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:50.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.772 --rc genhtml_branch_coverage=1 00:25:50.772 --rc genhtml_function_coverage=1 00:25:50.772 --rc genhtml_legend=1 00:25:50.772 --rc geninfo_all_blocks=1 00:25:50.772 --rc geninfo_unexecuted_blocks=1 00:25:50.772 00:25:50.772 ' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:50.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.yC12b47y69 00:25:50.772 13:45:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77198 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77198 00:25:50.773 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77198 ']' 00:25:50.773 13:45:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:50.773 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.773 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.773 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.773 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.773 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:50.773 [2024-11-20 13:45:50.102266] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
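The waitforlisten step above blocks until the freshly launched spdk_tgt (pid 77198) answers on /var/tmp/spdk.sock, retrying up to the max_retries=100 visible in the trace. A minimal stand-in with the same shape, assuming rpc.py's rpc_get_methods call and the default socket path; this is a sketch, not the helper's actual implementation:

    # Poll the RPC socket until the target responds or retries run out.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break    # spdk_tgt is up and serving RPCs
        fi
        sleep 0.5
    done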
00:25:50.773 [2024-11-20 13:45:50.102388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77198 ] 00:25:51.032 [2024-11-20 13:45:50.260491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.032 [2024-11-20 13:45:50.358446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.619 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.619 13:45:50 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:51.619 13:45:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:51.619 13:45:50 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:51.619 13:45:50 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:51.619 13:45:50 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:51.619 13:45:50 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:51.619 13:45:50 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:51.916 13:45:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:51.916 13:45:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:51.916 13:45:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:51.916 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:51.916 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:51.916 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:51.916 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:51.916 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:52.174 { 00:25:52.174 "name": "nvme0n1", 00:25:52.174 "aliases": [ 00:25:52.174 "aed47356-6fd5-4a6c-be63-0ecd47b3dd0c" 00:25:52.174 ], 00:25:52.174 "product_name": "NVMe disk", 00:25:52.174 "block_size": 4096, 00:25:52.174 "num_blocks": 1310720, 00:25:52.174 "uuid": "aed47356-6fd5-4a6c-be63-0ecd47b3dd0c", 00:25:52.174 "numa_id": -1, 00:25:52.174 "assigned_rate_limits": { 00:25:52.174 "rw_ios_per_sec": 0, 00:25:52.174 "rw_mbytes_per_sec": 0, 00:25:52.174 "r_mbytes_per_sec": 0, 00:25:52.174 "w_mbytes_per_sec": 0 00:25:52.174 }, 00:25:52.174 "claimed": true, 00:25:52.174 "claim_type": "read_many_write_one", 00:25:52.174 "zoned": false, 00:25:52.174 "supported_io_types": { 00:25:52.174 "read": true, 00:25:52.174 "write": true, 00:25:52.174 "unmap": true, 00:25:52.174 "flush": true, 00:25:52.174 "reset": true, 00:25:52.174 "nvme_admin": true, 00:25:52.174 "nvme_io": true, 00:25:52.174 "nvme_io_md": false, 00:25:52.174 "write_zeroes": true, 00:25:52.174 "zcopy": false, 00:25:52.174 "get_zone_info": false, 00:25:52.174 "zone_management": false, 00:25:52.174 "zone_append": false, 00:25:52.174 "compare": true, 00:25:52.174 "compare_and_write": false, 00:25:52.174 "abort": true, 00:25:52.174 "seek_hole": false, 00:25:52.174 "seek_data": false, 00:25:52.174 "copy": true, 00:25:52.174 "nvme_iov_md": false 00:25:52.174 }, 00:25:52.174 "driver_specific": { 00:25:52.174 "nvme": [ 
00:25:52.174 { 00:25:52.174 "pci_address": "0000:00:11.0", 00:25:52.174 "trid": { 00:25:52.174 "trtype": "PCIe", 00:25:52.174 "traddr": "0000:00:11.0" 00:25:52.174 }, 00:25:52.174 "ctrlr_data": { 00:25:52.174 "cntlid": 0, 00:25:52.174 "vendor_id": "0x1b36", 00:25:52.174 "model_number": "QEMU NVMe Ctrl", 00:25:52.174 "serial_number": "12341", 00:25:52.174 "firmware_revision": "8.0.0", 00:25:52.174 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:52.174 "oacs": { 00:25:52.174 "security": 0, 00:25:52.174 "format": 1, 00:25:52.174 "firmware": 0, 00:25:52.174 "ns_manage": 1 00:25:52.174 }, 00:25:52.174 "multi_ctrlr": false, 00:25:52.174 "ana_reporting": false 00:25:52.174 }, 00:25:52.174 "vs": { 00:25:52.174 "nvme_version": "1.4" 00:25:52.174 }, 00:25:52.174 "ns_data": { 00:25:52.174 "id": 1, 00:25:52.174 "can_share": false 00:25:52.174 } 00:25:52.174 } 00:25:52.174 ], 00:25:52.174 "mp_policy": "active_passive" 00:25:52.174 } 00:25:52.174 } 00:25:52.174 ]' 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:52.174 13:45:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:52.174 13:45:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:52.174 13:45:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:52.174 13:45:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:52.174 13:45:51 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:52.174 13:45:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:52.431 13:45:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=9fe4c930-151c-4e28-9cad-872c0321c2e5 00:25:52.431 13:45:51 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:52.431 13:45:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fe4c930-151c-4e28-9cad-872c0321c2e5 00:25:52.688 13:45:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:52.946 13:45:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ff93497a-1f17-4896-a075-7aaa70a53096 00:25:52.946 13:45:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff93497a-1f17-4896-a075-7aaa70a53096 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:53.204 13:45:52 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.204 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.204 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:53.204 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:53.204 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:53.204 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:53.463 { 00:25:53.463 "name": "6e575e0d-25e2-4c74-94c8-9e109c67509d", 00:25:53.463 "aliases": [ 00:25:53.463 "lvs/nvme0n1p0" 00:25:53.463 ], 00:25:53.463 "product_name": "Logical Volume", 00:25:53.463 "block_size": 4096, 00:25:53.463 "num_blocks": 26476544, 00:25:53.463 "uuid": "6e575e0d-25e2-4c74-94c8-9e109c67509d", 00:25:53.463 "assigned_rate_limits": { 00:25:53.463 "rw_ios_per_sec": 0, 00:25:53.463 "rw_mbytes_per_sec": 0, 00:25:53.463 "r_mbytes_per_sec": 0, 00:25:53.463 "w_mbytes_per_sec": 0 00:25:53.463 }, 00:25:53.463 "claimed": false, 00:25:53.463 "zoned": false, 00:25:53.463 "supported_io_types": { 00:25:53.463 "read": true, 00:25:53.463 "write": true, 00:25:53.463 "unmap": true, 00:25:53.463 "flush": false, 00:25:53.463 "reset": true, 00:25:53.463 "nvme_admin": false, 00:25:53.463 "nvme_io": false, 00:25:53.463 "nvme_io_md": false, 00:25:53.463 "write_zeroes": true, 00:25:53.463 "zcopy": false, 00:25:53.463 "get_zone_info": false, 00:25:53.463 "zone_management": false, 00:25:53.463 "zone_append": false, 00:25:53.463 "compare": false, 00:25:53.463 "compare_and_write": false, 00:25:53.463 "abort": false, 00:25:53.463 "seek_hole": true, 00:25:53.463 "seek_data": true, 00:25:53.463 "copy": false, 00:25:53.463 "nvme_iov_md": false 00:25:53.463 }, 00:25:53.463 "driver_specific": { 00:25:53.463 "lvol": { 00:25:53.463 "lvol_store_uuid": "ff93497a-1f17-4896-a075-7aaa70a53096", 00:25:53.463 "base_bdev": "nvme0n1", 00:25:53.463 "thin_provision": true, 00:25:53.463 "num_allocated_clusters": 0, 00:25:53.463 "snapshot": false, 00:25:53.463 "clone": false, 00:25:53.463 "esnap_clone": false 00:25:53.463 } 00:25:53.463 } 00:25:53.463 } 00:25:53.463 ]' 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:53.463 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:53.463 13:45:52 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:53.463 13:45:52 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:53.463 13:45:52 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:53.721 13:45:52 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:53.721 13:45:52 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:53.721 13:45:52 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.721 13:45:52 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.721 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:53.721 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:53.721 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:53.721 13:45:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:53.979 { 00:25:53.979 "name": "6e575e0d-25e2-4c74-94c8-9e109c67509d", 00:25:53.979 "aliases": [ 00:25:53.979 "lvs/nvme0n1p0" 00:25:53.979 ], 00:25:53.979 "product_name": "Logical Volume", 00:25:53.979 "block_size": 4096, 00:25:53.979 "num_blocks": 26476544, 00:25:53.979 "uuid": "6e575e0d-25e2-4c74-94c8-9e109c67509d", 00:25:53.979 "assigned_rate_limits": { 00:25:53.979 "rw_ios_per_sec": 0, 00:25:53.979 "rw_mbytes_per_sec": 0, 00:25:53.979 "r_mbytes_per_sec": 0, 00:25:53.979 "w_mbytes_per_sec": 0 00:25:53.979 }, 00:25:53.979 "claimed": false, 00:25:53.979 "zoned": false, 00:25:53.979 "supported_io_types": { 00:25:53.979 "read": true, 00:25:53.979 "write": true, 00:25:53.979 "unmap": true, 00:25:53.979 "flush": false, 00:25:53.979 "reset": true, 00:25:53.979 "nvme_admin": false, 00:25:53.979 "nvme_io": false, 00:25:53.979 "nvme_io_md": false, 00:25:53.979 "write_zeroes": true, 00:25:53.979 "zcopy": false, 00:25:53.979 "get_zone_info": false, 00:25:53.979 "zone_management": false, 00:25:53.979 "zone_append": false, 00:25:53.979 "compare": false, 00:25:53.979 "compare_and_write": false, 00:25:53.979 "abort": false, 00:25:53.979 "seek_hole": true, 00:25:53.979 "seek_data": true, 00:25:53.979 "copy": false, 00:25:53.979 "nvme_iov_md": false 00:25:53.979 }, 00:25:53.979 "driver_specific": { 00:25:53.979 "lvol": { 00:25:53.979 "lvol_store_uuid": "ff93497a-1f17-4896-a075-7aaa70a53096", 00:25:53.979 "base_bdev": "nvme0n1", 00:25:53.979 "thin_provision": true, 00:25:53.979 "num_allocated_clusters": 0, 00:25:53.979 "snapshot": false, 00:25:53.979 "clone": false, 00:25:53.979 "esnap_clone": false 00:25:53.979 } 00:25:53.979 } 00:25:53.979 } 00:25:53.979 ]' 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:53.979 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:53.979 13:45:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:53.979 13:45:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:54.237 13:45:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:54.237 13:45:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:54.237 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:54.237 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:54.237 13:45:53 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:25:54.237 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:54.237 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e575e0d-25e2-4c74-94c8-9e109c67509d 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:54.495 { 00:25:54.495 "name": "6e575e0d-25e2-4c74-94c8-9e109c67509d", 00:25:54.495 "aliases": [ 00:25:54.495 "lvs/nvme0n1p0" 00:25:54.495 ], 00:25:54.495 "product_name": "Logical Volume", 00:25:54.495 "block_size": 4096, 00:25:54.495 "num_blocks": 26476544, 00:25:54.495 "uuid": "6e575e0d-25e2-4c74-94c8-9e109c67509d", 00:25:54.495 "assigned_rate_limits": { 00:25:54.495 "rw_ios_per_sec": 0, 00:25:54.495 "rw_mbytes_per_sec": 0, 00:25:54.495 "r_mbytes_per_sec": 0, 00:25:54.495 "w_mbytes_per_sec": 0 00:25:54.495 }, 00:25:54.495 "claimed": false, 00:25:54.495 "zoned": false, 00:25:54.495 "supported_io_types": { 00:25:54.495 "read": true, 00:25:54.495 "write": true, 00:25:54.495 "unmap": true, 00:25:54.495 "flush": false, 00:25:54.495 "reset": true, 00:25:54.495 "nvme_admin": false, 00:25:54.495 "nvme_io": false, 00:25:54.495 "nvme_io_md": false, 00:25:54.495 "write_zeroes": true, 00:25:54.495 "zcopy": false, 00:25:54.495 "get_zone_info": false, 00:25:54.495 "zone_management": false, 00:25:54.495 "zone_append": false, 00:25:54.495 "compare": false, 00:25:54.495 "compare_and_write": false, 00:25:54.495 "abort": false, 00:25:54.495 "seek_hole": true, 00:25:54.495 "seek_data": true, 00:25:54.495 "copy": false, 00:25:54.495 "nvme_iov_md": false 00:25:54.495 }, 00:25:54.495 "driver_specific": { 00:25:54.495 "lvol": { 00:25:54.495 "lvol_store_uuid": "ff93497a-1f17-4896-a075-7aaa70a53096", 00:25:54.495 "base_bdev": "nvme0n1", 00:25:54.495 "thin_provision": true, 00:25:54.495 "num_allocated_clusters": 0, 00:25:54.495 "snapshot": false, 00:25:54.495 "clone": false, 00:25:54.495 "esnap_clone": false 00:25:54.495 } 00:25:54.495 } 00:25:54.495 } 00:25:54.495 ]' 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:54.495 13:45:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6e575e0d-25e2-4c74-94c8-9e109c67509d --l2p_dram_limit 10' 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:54.495 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:54.495 13:45:53 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e575e0d-25e2-4c74-94c8-9e109c67509d --l2p_dram_limit 10 -c nvc0n1p0 00:25:54.754 
[2024-11-20 13:45:53.947063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.947111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:54.754 [2024-11-20 13:45:53.947124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:54.754 [2024-11-20 13:45:53.947131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.947181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.947189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.754 [2024-11-20 13:45:53.947197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:54.754 [2024-11-20 13:45:53.947202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.947221] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:54.754 [2024-11-20 13:45:53.947836] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:54.754 [2024-11-20 13:45:53.947858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.947865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.754 [2024-11-20 13:45:53.947873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:25:54.754 [2024-11-20 13:45:53.947878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.947907] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d59d0621-983a-490d-b4c2-bda30131d214 00:25:54.754 [2024-11-20 13:45:53.948890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.948920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:54.754 [2024-11-20 13:45:53.948928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:54.754 [2024-11-20 13:45:53.948937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.953660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.953692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.754 [2024-11-20 13:45:53.953700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:25:54.754 [2024-11-20 13:45:53.953708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.953781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.953789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.754 [2024-11-20 13:45:53.953796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:54.754 [2024-11-20 13:45:53.953806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.953840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.953848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:54.754 [2024-11-20 13:45:53.953855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:54.754 [2024-11-20 13:45:53.953864] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.953881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:54.754 [2024-11-20 13:45:53.956839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.956873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.754 [2024-11-20 13:45:53.956883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:25:54.754 [2024-11-20 13:45:53.956890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.956918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.956925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:54.754 [2024-11-20 13:45:53.956933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:54.754 [2024-11-20 13:45:53.956939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.956966] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:54.754 [2024-11-20 13:45:53.957090] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:54.754 [2024-11-20 13:45:53.957107] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:54.754 [2024-11-20 13:45:53.957115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:54.754 [2024-11-20 13:45:53.957125] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:54.754 [2024-11-20 13:45:53.957132] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:54.754 [2024-11-20 13:45:53.957140] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:54.754 [2024-11-20 13:45:53.957146] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:54.754 [2024-11-20 13:45:53.957155] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:54.754 [2024-11-20 13:45:53.957161] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:54.754 [2024-11-20 13:45:53.957168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.957174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:54.754 [2024-11-20 13:45:53.957182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:25:54.754 [2024-11-20 13:45:53.957192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.957262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.754 [2024-11-20 13:45:53.957271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:54.754 [2024-11-20 13:45:53.957279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:54.754 [2024-11-20 13:45:53.957284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.754 [2024-11-20 13:45:53.957368] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:54.754 [2024-11-20 13:45:53.957376] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:25:54.754 [2024-11-20 13:45:53.957384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:54.755 [2024-11-20 13:45:53.957403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:54.755 [2024-11-20 13:45:53.957421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.755 [2024-11-20 13:45:53.957433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:54.755 [2024-11-20 13:45:53.957438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:54.755 [2024-11-20 13:45:53.957444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.755 [2024-11-20 13:45:53.957449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:54.755 [2024-11-20 13:45:53.957456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:54.755 [2024-11-20 13:45:53.957461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:54.755 [2024-11-20 13:45:53.957473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:54.755 [2024-11-20 13:45:53.957493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:54.755 [2024-11-20 13:45:53.957509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:54.755 [2024-11-20 13:45:53.957527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:54.755 [2024-11-20 13:45:53.957543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:54.755 [2024-11-20 13:45:53.957562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957568] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.755 [2024-11-20 13:45:53.957574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:54.755 [2024-11-20 13:45:53.957579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:54.755 [2024-11-20 13:45:53.957586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.755 [2024-11-20 13:45:53.957591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:54.755 [2024-11-20 13:45:53.957598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:54.755 [2024-11-20 13:45:53.957603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:54.755 [2024-11-20 13:45:53.957614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:54.755 [2024-11-20 13:45:53.957621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957625] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:54.755 [2024-11-20 13:45:53.957632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:54.755 [2024-11-20 13:45:53.957638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.755 [2024-11-20 13:45:53.957652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:54.755 [2024-11-20 13:45:53.957659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:54.755 [2024-11-20 13:45:53.957664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:54.755 [2024-11-20 13:45:53.957671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:54.755 [2024-11-20 13:45:53.957677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:54.755 [2024-11-20 13:45:53.957683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:54.755 [2024-11-20 13:45:53.957690] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:54.755 [2024-11-20 13:45:53.957698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:54.755 [2024-11-20 13:45:53.957713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:54.755 [2024-11-20 13:45:53.957719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:54.755 [2024-11-20 13:45:53.957725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:54.755 [2024-11-20 13:45:53.957731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:54.755 [2024-11-20 13:45:53.957737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:25:54.755 [2024-11-20 13:45:53.957743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:54.755 [2024-11-20 13:45:53.957749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:54.755 [2024-11-20 13:45:53.957755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:54.755 [2024-11-20 13:45:53.957764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:54.755 [2024-11-20 13:45:53.957796] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:54.755 [2024-11-20 13:45:53.957803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:54.755 [2024-11-20 13:45:53.957817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:54.755 [2024-11-20 13:45:53.957823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:54.755 [2024-11-20 13:45:53.957830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:54.755 [2024-11-20 13:45:53.957835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.755 [2024-11-20 13:45:53.957842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:54.755 [2024-11-20 13:45:53.957848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:25:54.755 [2024-11-20 13:45:53.957855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.755 [2024-11-20 13:45:53.957897] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
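The layout dump above is internally consistent with the bdev sizes reported earlier in this run: at a 4096-byte block size, the lvol's 26476544 blocks are exactly the 103424.00 MiB base-device capacity, and 20971520 L2P entries at an address size of 4 bytes are exactly the 80.00 MiB l2p region; the 5171.00 MiB NV cache capacity is the nvc0n1p0 split created earlier with -s 5171. Checking the arithmetic in shell:

    # All input numbers taken verbatim from the log above.
    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 -> base device capacity in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80     -> L2P table size in MiB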
00:25:54.755 [2024-11-20 13:45:53.957908] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:56.654 [2024-11-20 13:45:56.064220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.654 [2024-11-20 13:45:56.064283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:56.655 [2024-11-20 13:45:56.064298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2106.313 ms 00:25:56.655 [2024-11-20 13:45:56.064309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.912 [2024-11-20 13:45:56.089644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.089692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.913 [2024-11-20 13:45:56.089704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.133 ms 00:25:56.913 [2024-11-20 13:45:56.089715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.089848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.089861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.913 [2024-11-20 13:45:56.089869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:56.913 [2024-11-20 13:45:56.089883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.119863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.119907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.913 [2024-11-20 13:45:56.119918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.933 ms 00:25:56.913 [2024-11-20 13:45:56.119928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.119961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.119987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.913 [2024-11-20 13:45:56.119995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:56.913 [2024-11-20 13:45:56.120004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.120361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.120382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.913 [2024-11-20 13:45:56.120391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:25:56.913 [2024-11-20 13:45:56.120400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.120505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.120515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.913 [2024-11-20 13:45:56.120525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:56.913 [2024-11-20 13:45:56.120536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.134259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.134426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.913 [2024-11-20 
13:45:56.134443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.705 ms 00:25:56.913 [2024-11-20 13:45:56.134452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.158508] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:56.913 [2024-11-20 13:45:56.161912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.161943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.913 [2024-11-20 13:45:56.161959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.374 ms 00:25:56.913 [2024-11-20 13:45:56.161981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.216780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.217005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:56.913 [2024-11-20 13:45:56.217029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.749 ms 00:25:56.913 [2024-11-20 13:45:56.217038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.217216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.217230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.913 [2024-11-20 13:45:56.217243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:25:56.913 [2024-11-20 13:45:56.217251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.240326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.240498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:56.913 [2024-11-20 13:45:56.240521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.026 ms 00:25:56.913 [2024-11-20 13:45:56.240530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.263148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.263274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:56.913 [2024-11-20 13:45:56.263294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.559 ms 00:25:56.913 [2024-11-20 13:45:56.263302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.263855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.263865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.913 [2024-11-20 13:45:56.263876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:25:56.913 [2024-11-20 13:45:56.263886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.913 [2024-11-20 13:45:56.333890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.913 [2024-11-20 13:45:56.333943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:56.913 [2024-11-20 13:45:56.333963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.970 ms 00:25:56.913 [2024-11-20 13:45:56.333983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.172 [2024-11-20 
13:45:56.357909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.172 [2024-11-20 13:45:56.357951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:57.172 [2024-11-20 13:45:56.357966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.851 ms 00:25:57.172 [2024-11-20 13:45:56.357988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.172 [2024-11-20 13:45:56.380889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.172 [2024-11-20 13:45:56.380947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:57.172 [2024-11-20 13:45:56.380962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.859 ms 00:25:57.172 [2024-11-20 13:45:56.380985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.172 [2024-11-20 13:45:56.404066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.172 [2024-11-20 13:45:56.404226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.172 [2024-11-20 13:45:56.404248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:25:57.172 [2024-11-20 13:45:56.404256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.172 [2024-11-20 13:45:56.404296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.172 [2024-11-20 13:45:56.404306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.172 [2024-11-20 13:45:56.404318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:57.172 [2024-11-20 13:45:56.404325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.172 [2024-11-20 13:45:56.404418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.172 [2024-11-20 13:45:56.404428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.172 [2024-11-20 13:45:56.404440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:57.172 [2024-11-20 13:45:56.404448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.172 [2024-11-20 13:45:56.405687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2458.208 ms, result 0 00:25:57.172 { 00:25:57.172 "name": "ftl0", 00:25:57.172 "uuid": "d59d0621-983a-490d-b4c2-bda30131d214" 00:25:57.172 } 00:25:57.172 13:45:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:57.172 13:45:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:57.430 13:45:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:57.430 13:45:56 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:57.430 [2024-11-20 13:45:56.812895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.430 [2024-11-20 13:45:56.812944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:57.430 [2024-11-20 13:45:56.812957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.430 [2024-11-20 13:45:56.812982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.430 [2024-11-20 13:45:56.813007] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:25:57.430 [2024-11-20 13:45:56.815589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.430 [2024-11-20 13:45:56.815736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:57.430 [2024-11-20 13:45:56.815756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:25:57.430 [2024-11-20 13:45:56.815764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.430 [2024-11-20 13:45:56.816059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.430 [2024-11-20 13:45:56.816077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:57.430 [2024-11-20 13:45:56.816088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:25:57.430 [2024-11-20 13:45:56.816095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.430 [2024-11-20 13:45:56.819342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.430 [2024-11-20 13:45:56.819365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:57.430 [2024-11-20 13:45:56.819376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:25:57.430 [2024-11-20 13:45:56.819385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.430 [2024-11-20 13:45:56.825528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.430 [2024-11-20 13:45:56.825557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:57.430 [2024-11-20 13:45:56.825571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:25:57.430 [2024-11-20 13:45:56.825580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.430 [2024-11-20 13:45:56.848493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.430 [2024-11-20 13:45:56.848540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:57.430 [2024-11-20 13:45:56.848557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.852 ms 00:25:57.430 [2024-11-20 13:45:56.848564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.863412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.689 [2024-11-20 13:45:56.863450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.689 [2024-11-20 13:45:56.863466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.799 ms 00:25:57.689 [2024-11-20 13:45:56.863476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.863628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.689 [2024-11-20 13:45:56.863638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.689 [2024-11-20 13:45:56.863649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:57.689 [2024-11-20 13:45:56.863657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.886387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.689 [2024-11-20 13:45:56.886531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.689 [2024-11-20 13:45:56.886552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.709 ms 00:25:57.689 [2024-11-20 13:45:56.886560] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.908856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.689 [2024-11-20 13:45:56.908888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.689 [2024-11-20 13:45:56.908902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.259 ms 00:25:57.689 [2024-11-20 13:45:56.908909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.930413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.689 [2024-11-20 13:45:56.930447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.689 [2024-11-20 13:45:56.930460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.462 ms 00:25:57.689 [2024-11-20 13:45:56.930467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.952522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.689 [2024-11-20 13:45:56.952554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.689 [2024-11-20 13:45:56.952566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.978 ms 00:25:57.689 [2024-11-20 13:45:56.952574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.689 [2024-11-20 13:45:56.952609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.689 [2024-11-20 13:45:56.952624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 
13:45:56.952746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.689 [2024-11-20 13:45:56.952882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:25:57.690 [2024-11-20 13:45:56.952966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.952999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.690 [2024-11-20 13:45:56.953556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.690 [2024-11-20 13:45:56.953568] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d59d0621-983a-490d-b4c2-bda30131d214 00:25:57.690 [2024-11-20 13:45:56.953576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:57.690 [2024-11-20 13:45:56.953586] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:57.690 [2024-11-20 13:45:56.953593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:57.690 [2024-11-20 13:45:56.953604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:57.690 [2024-11-20 13:45:56.953611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.690 [2024-11-20 13:45:56.953619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.690 [2024-11-20 13:45:56.953626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.690 [2024-11-20 13:45:56.953634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.690 [2024-11-20 13:45:56.953640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.690 [2024-11-20 13:45:56.953649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.690 [2024-11-20 13:45:56.953656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.690 [2024-11-20 13:45:56.953665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:25:57.690 [2024-11-20 13:45:56.953672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.690 [2024-11-20 13:45:56.966155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.690 [2024-11-20 13:45:56.966194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:25:57.690 [2024-11-20 13:45:56.966207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.439 ms 00:25:57.690 [2024-11-20 13:45:56.966215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.690 [2024-11-20 13:45:56.966561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.690 [2024-11-20 13:45:56.966570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.690 [2024-11-20 13:45:56.966582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:25:57.691 [2024-11-20 13:45:56.966589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.691 [2024-11-20 13:45:57.007796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.691 [2024-11-20 13:45:57.007840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.691 [2024-11-20 13:45:57.007853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.691 [2024-11-20 13:45:57.007861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.691 [2024-11-20 13:45:57.007926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.691 [2024-11-20 13:45:57.007934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.691 [2024-11-20 13:45:57.007945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.691 [2024-11-20 13:45:57.007952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.691 [2024-11-20 13:45:57.008053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.691 [2024-11-20 13:45:57.008064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.691 [2024-11-20 13:45:57.008074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.691 [2024-11-20 13:45:57.008081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.691 [2024-11-20 13:45:57.008101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.691 [2024-11-20 13:45:57.008109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.691 [2024-11-20 13:45:57.008117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.691 [2024-11-20 13:45:57.008124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.691 [2024-11-20 13:45:57.083375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.691 [2024-11-20 13:45:57.083570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.691 [2024-11-20 13:45:57.083593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.691 [2024-11-20 13:45:57.083602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.145569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.145616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.950 [2024-11-20 13:45:57.145629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.145639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.145726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.145736] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.950 [2024-11-20 13:45:57.145746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.145753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.145800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.145809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.950 [2024-11-20 13:45:57.145818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.145826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.145917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.145926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.950 [2024-11-20 13:45:57.145935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.145942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.146001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.146011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.950 [2024-11-20 13:45:57.146021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.146047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.146085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.146094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.950 [2024-11-20 13:45:57.146103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.146110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.146155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.950 [2024-11-20 13:45:57.146165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.950 [2024-11-20 13:45:57.146174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.950 [2024-11-20 13:45:57.146181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.950 [2024-11-20 13:45:57.146304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.379 ms, result 0 00:25:57.950 true 00:25:57.950 13:45:57 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77198 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77198 ']' 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77198 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77198 00:25:57.950 killing process with pid 77198 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77198' 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77198 00:25:57.950 13:45:57 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77198 00:26:07.923 13:46:05 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:10.538 262144+0 records in 00:26:10.538 262144+0 records out 00:26:10.538 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.30307 s, 250 MB/s 00:26:10.538 13:46:09 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:13.165 13:46:11 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:13.165 [2024-11-20 13:46:12.054202] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:26:13.165 [2024-11-20 13:46:12.054380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77423 ] 00:26:13.165 [2024-11-20 13:46:12.231835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.165 [2024-11-20 13:46:12.331181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.165 [2024-11-20 13:46:12.586845] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.165 [2024-11-20 13:46:12.587086] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.425 [2024-11-20 13:46:12.739960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.425 [2024-11-20 13:46:12.740030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:13.425 [2024-11-20 13:46:12.740047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.425 [2024-11-20 13:46:12.740055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.425 [2024-11-20 13:46:12.740108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.425 [2024-11-20 13:46:12.740118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.425 [2024-11-20 13:46:12.740128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:13.425 [2024-11-20 13:46:12.740136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.425 [2024-11-20 13:46:12.740155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:13.425 [2024-11-20 13:46:12.740910] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:13.425 [2024-11-20 13:46:12.740927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.425 [2024-11-20 13:46:12.740935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.425 [2024-11-20 13:46:12.740944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:26:13.425 [2024-11-20 13:46:12.740952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.425 [2024-11-20 13:46:12.742493] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:13.425 [2024-11-20 13:46:12.754680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.425 [2024-11-20 13:46:12.754719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:13.425 [2024-11-20 13:46:12.754734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.189 ms 00:26:13.425 [2024-11-20 13:46:12.754743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.425 [2024-11-20 13:46:12.754798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.425 [2024-11-20 13:46:12.754808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:13.425 [2024-11-20 13:46:12.754816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:13.425 [2024-11-20 13:46:12.754823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.425 [2024-11-20 13:46:12.759570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.425 [2024-11-20 13:46:12.759610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.425 [2024-11-20 13:46:12.759620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:26:13.426 [2024-11-20 13:46:12.759631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.759722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.426 [2024-11-20 13:46:12.759731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.426 [2024-11-20 13:46:12.759740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:13.426 [2024-11-20 13:46:12.759747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.759791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.426 [2024-11-20 13:46:12.759801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:13.426 [2024-11-20 13:46:12.759808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:13.426 [2024-11-20 13:46:12.759816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.759841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:13.426 [2024-11-20 13:46:12.763126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.426 [2024-11-20 13:46:12.763149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.426 [2024-11-20 13:46:12.763158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:26:13.426 [2024-11-20 13:46:12.763168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.763196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.426 [2024-11-20 13:46:12.763204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:13.426 [2024-11-20 13:46:12.763212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:13.426 [2024-11-20 13:46:12.763219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.763238] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:13.426 [2024-11-20 13:46:12.763255] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:13.426 [2024-11-20 13:46:12.763288] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:13.426 [2024-11-20 13:46:12.763306] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:13.426 [2024-11-20 13:46:12.763412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:13.426 [2024-11-20 13:46:12.763422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:13.426 [2024-11-20 13:46:12.763432] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:13.426 [2024-11-20 13:46:12.763442] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763450] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763459] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:13.426 [2024-11-20 13:46:12.763466] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:13.426 [2024-11-20 13:46:12.763473] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:13.426 [2024-11-20 13:46:12.763483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:13.426 [2024-11-20 13:46:12.763490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.426 [2024-11-20 13:46:12.763498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:13.426 [2024-11-20 13:46:12.763505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:26:13.426 [2024-11-20 13:46:12.763512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.763593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.426 [2024-11-20 13:46:12.763601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:13.426 [2024-11-20 13:46:12.763609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:13.426 [2024-11-20 13:46:12.763615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.426 [2024-11-20 13:46:12.763718] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:13.426 [2024-11-20 13:46:12.763728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:13.426 [2024-11-20 13:46:12.763736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:13.426 [2024-11-20 13:46:12.763758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:13.426 [2024-11-20 13:46:12.763778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:13.426 [2024-11-20 
13:46:12.763785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.426 [2024-11-20 13:46:12.763791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:13.426 [2024-11-20 13:46:12.763798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:13.426 [2024-11-20 13:46:12.763804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.426 [2024-11-20 13:46:12.763810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:13.426 [2024-11-20 13:46:12.763817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:13.426 [2024-11-20 13:46:12.763828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:13.426 [2024-11-20 13:46:12.763841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:13.426 [2024-11-20 13:46:12.763863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:13.426 [2024-11-20 13:46:12.763882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:13.426 [2024-11-20 13:46:12.763901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:13.426 [2024-11-20 13:46:12.763919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.426 [2024-11-20 13:46:12.763932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:13.426 [2024-11-20 13:46:12.763938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:13.426 [2024-11-20 13:46:12.763944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.426 [2024-11-20 13:46:12.763950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:13.426 [2024-11-20 13:46:12.763957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:13.426 [2024-11-20 13:46:12.763963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.426 [2024-11-20 13:46:12.763989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:13.426 [2024-11-20 13:46:12.763996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:13.426 [2024-11-20 13:46:12.764003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.764009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:26:13.426 [2024-11-20 13:46:12.764016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:13.426 [2024-11-20 13:46:12.764022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.764028] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:13.426 [2024-11-20 13:46:12.764035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:13.426 [2024-11-20 13:46:12.764043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.426 [2024-11-20 13:46:12.764049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.426 [2024-11-20 13:46:12.764057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:13.426 [2024-11-20 13:46:12.764063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:13.426 [2024-11-20 13:46:12.764070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:13.426 [2024-11-20 13:46:12.764077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:13.426 [2024-11-20 13:46:12.764084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:13.426 [2024-11-20 13:46:12.764092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:13.426 [2024-11-20 13:46:12.764101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:13.426 [2024-11-20 13:46:12.764110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.426 [2024-11-20 13:46:12.764118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:13.426 [2024-11-20 13:46:12.764125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:13.426 [2024-11-20 13:46:12.764132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:13.426 [2024-11-20 13:46:12.764139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:13.426 [2024-11-20 13:46:12.764146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:13.426 [2024-11-20 13:46:12.764153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:13.426 [2024-11-20 13:46:12.764160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:13.427 [2024-11-20 13:46:12.764167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:13.427 [2024-11-20 13:46:12.764174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:13.427 [2024-11-20 13:46:12.764181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:13.427 [2024-11-20 13:46:12.764188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:13.427 [2024-11-20 13:46:12.764195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:13.427 [2024-11-20 13:46:12.764202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:13.427 [2024-11-20 13:46:12.764209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:13.427 [2024-11-20 13:46:12.764216] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:13.427 [2024-11-20 13:46:12.764226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.427 [2024-11-20 13:46:12.764233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:13.427 [2024-11-20 13:46:12.764241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:13.427 [2024-11-20 13:46:12.764248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:13.427 [2024-11-20 13:46:12.764255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:13.427 [2024-11-20 13:46:12.764262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.764269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:13.427 [2024-11-20 13:46:12.764276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:26:13.427 [2024-11-20 13:46:12.764283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.789650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.789684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.427 [2024-11-20 13:46:12.789695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.314 ms 00:26:13.427 [2024-11-20 13:46:12.789703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.789799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.789812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:13.427 [2024-11-20 13:46:12.789820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:13.427 [2024-11-20 13:46:12.789827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.832797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.832846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.427 [2024-11-20 13:46:12.832859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.913 ms 00:26:13.427 [2024-11-20 13:46:12.832867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.832922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 
13:46:12.832932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.427 [2024-11-20 13:46:12.832944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:13.427 [2024-11-20 13:46:12.832951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.833318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.833341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.427 [2024-11-20 13:46:12.833351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:26:13.427 [2024-11-20 13:46:12.833359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.833483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.833497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.427 [2024-11-20 13:46:12.833505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:13.427 [2024-11-20 13:46:12.833520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.427 [2024-11-20 13:46:12.846299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.427 [2024-11-20 13:46:12.846328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.427 [2024-11-20 13:46:12.846340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.761 ms 00:26:13.427 [2024-11-20 13:46:12.846348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.685 [2024-11-20 13:46:12.858643] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:13.686 [2024-11-20 13:46:12.858676] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:13.686 [2024-11-20 13:46:12.858687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.858695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:13.686 [2024-11-20 13:46:12.858704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.240 ms 00:26:13.686 [2024-11-20 13:46:12.858711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.885209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.885252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:13.686 [2024-11-20 13:46:12.885265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:26:13.686 [2024-11-20 13:46:12.885274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.897453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.897500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:13.686 [2024-11-20 13:46:12.897513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.119 ms 00:26:13.686 [2024-11-20 13:46:12.897521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.908766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.908797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:26:13.686 [2024-11-20 13:46:12.908808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.200 ms 00:26:13.686 [2024-11-20 13:46:12.908815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.909458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.909482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:13.686 [2024-11-20 13:46:12.909492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:26:13.686 [2024-11-20 13:46:12.909499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.962791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.962836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:13.686 [2024-11-20 13:46:12.962849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.272 ms 00:26:13.686 [2024-11-20 13:46:12.962860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.973039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:13.686 [2024-11-20 13:46:12.975501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.975526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:13.686 [2024-11-20 13:46:12.975537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.594 ms 00:26:13.686 [2024-11-20 13:46:12.975546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.975643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.975653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:13.686 [2024-11-20 13:46:12.975662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:13.686 [2024-11-20 13:46:12.975670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.975733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.975743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:13.686 [2024-11-20 13:46:12.975751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:13.686 [2024-11-20 13:46:12.975758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.975776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.975784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:13.686 [2024-11-20 13:46:12.975791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.686 [2024-11-20 13:46:12.975798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.975826] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:13.686 [2024-11-20 13:46:12.975835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.975844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:13.686 [2024-11-20 13:46:12.975852] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:13.686 [2024-11-20 13:46:12.975859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.998596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.998629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:13.686 [2024-11-20 13:46:12.998640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.721 ms 00:26:13.686 [2024-11-20 13:46:12.998649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.998720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.686 [2024-11-20 13:46:12.998729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:13.686 [2024-11-20 13:46:12.998737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:13.686 [2024-11-20 13:46:12.998745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.686 [2024-11-20 13:46:12.999636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.272 ms, result 0 00:26:14.626  [2024-11-20T13:46:15.431Z] Copying: 37/1024 [MB] (37 MBps) [2024-11-20T13:46:16.365Z] Copying: 64/1024 [MB] (27 MBps) [2024-11-20T13:46:17.298Z] Copying: 103/1024 [MB] (38 MBps) [2024-11-20T13:46:18.230Z] Copying: 136/1024 [MB] (32 MBps) [2024-11-20T13:46:19.163Z] Copying: 176/1024 [MB] (40 MBps) [2024-11-20T13:46:20.093Z] Copying: 221/1024 [MB] (44 MBps) [2024-11-20T13:46:21.026Z] Copying: 266/1024 [MB] (45 MBps) [2024-11-20T13:46:22.452Z] Copying: 311/1024 [MB] (45 MBps) [2024-11-20T13:46:23.042Z] Copying: 355/1024 [MB] (44 MBps) [2024-11-20T13:46:24.416Z] Copying: 399/1024 [MB] (43 MBps) [2024-11-20T13:46:25.349Z] Copying: 445/1024 [MB] (45 MBps) [2024-11-20T13:46:26.308Z] Copying: 490/1024 [MB] (45 MBps) [2024-11-20T13:46:27.263Z] Copying: 536/1024 [MB] (45 MBps) [2024-11-20T13:46:28.196Z] Copying: 578/1024 [MB] (42 MBps) [2024-11-20T13:46:29.131Z] Copying: 624/1024 [MB] (45 MBps) [2024-11-20T13:46:30.062Z] Copying: 668/1024 [MB] (44 MBps) [2024-11-20T13:46:31.436Z] Copying: 710/1024 [MB] (41 MBps) [2024-11-20T13:46:32.371Z] Copying: 753/1024 [MB] (42 MBps) [2024-11-20T13:46:33.303Z] Copying: 797/1024 [MB] (43 MBps) [2024-11-20T13:46:34.237Z] Copying: 838/1024 [MB] (41 MBps) [2024-11-20T13:46:35.169Z] Copying: 880/1024 [MB] (41 MBps) [2024-11-20T13:46:36.101Z] Copying: 925/1024 [MB] (45 MBps) [2024-11-20T13:46:37.034Z] Copying: 970/1024 [MB] (44 MBps) [2024-11-20T13:46:37.293Z] Copying: 1015/1024 [MB] (45 MBps) [2024-11-20T13:46:37.293Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-11-20 13:46:37.194606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.194657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:37.866 [2024-11-20 13:46:37.194670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:37.866 [2024-11-20 13:46:37.194678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.194699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:37.866 [2024-11-20 13:46:37.197297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.197329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:26:37.866 [2024-11-20 13:46:37.197341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:26:37.866 [2024-11-20 13:46:37.197354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.198749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.198781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:37.866 [2024-11-20 13:46:37.198790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:26:37.866 [2024-11-20 13:46:37.198798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.210993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.211025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:37.866 [2024-11-20 13:46:37.211035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.181 ms 00:26:37.866 [2024-11-20 13:46:37.211042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.217172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.217198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:37.866 [2024-11-20 13:46:37.217207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:26:37.866 [2024-11-20 13:46:37.217214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.240035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.240079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:37.866 [2024-11-20 13:46:37.240093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.774 ms 00:26:37.866 [2024-11-20 13:46:37.240100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.254131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.254168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:37.866 [2024-11-20 13:46:37.254181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.997 ms 00:26:37.866 [2024-11-20 13:46:37.254190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.254316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.254326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:37.866 [2024-11-20 13:46:37.254340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:37.866 [2024-11-20 13:46:37.254348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.866 [2024-11-20 13:46:37.277379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.866 [2024-11-20 13:46:37.277412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:37.866 [2024-11-20 13:46:37.277424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.016 ms 00:26:37.866 [2024-11-20 13:46:37.277433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.143 [2024-11-20 13:46:37.299573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.143 [2024-11-20 13:46:37.299615] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:38.143 [2024-11-20 13:46:37.299634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.107 ms 00:26:38.143 [2024-11-20 13:46:37.299641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.143 [2024-11-20 13:46:37.321704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.143 [2024-11-20 13:46:37.321738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:38.143 [2024-11-20 13:46:37.321748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.027 ms 00:26:38.143 [2024-11-20 13:46:37.321755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.143 [2024-11-20 13:46:37.344080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.143 [2024-11-20 13:46:37.344114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:38.143 [2024-11-20 13:46:37.344124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.271 ms 00:26:38.143 [2024-11-20 13:46:37.344131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.143 [2024-11-20 13:46:37.344163] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:38.143 [2024-11-20 13:46:37.344177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:38.143 [2024-11-20 13:46:37.344235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344475] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 
13:46:37.344656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:26:38.144 [2024-11-20 13:46:37.344849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:38.144 [2024-11-20 13:46:37.344933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:38.144 [2024-11-20 13:46:37.344946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d59d0621-983a-490d-b4c2-bda30131d214 00:26:38.144 [2024-11-20 13:46:37.344956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:38.144 [2024-11-20 13:46:37.344963] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:38.144 [2024-11-20 13:46:37.344980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:38.144 [2024-11-20 13:46:37.344988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:38.144 [2024-11-20 13:46:37.344996] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:38.144 [2024-11-20 13:46:37.345003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:38.144 [2024-11-20 13:46:37.345011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:38.144 [2024-11-20 13:46:37.345024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:38.144 [2024-11-20 13:46:37.345030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:38.144 [2024-11-20 13:46:37.345037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.144 [2024-11-20 13:46:37.345045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:38.144 [2024-11-20 13:46:37.345053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:26:38.144 [2024-11-20 13:46:37.345060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.144 [2024-11-20 13:46:37.357389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.144 [2024-11-20 13:46:37.357420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:38.144 [2024-11-20 13:46:37.357431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.313 ms 00:26:38.144 [2024-11-20 13:46:37.357444] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.144 [2024-11-20 13:46:37.357779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.144 [2024-11-20 13:46:37.357801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:38.144 [2024-11-20 13:46:37.357810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:26:38.144 [2024-11-20 13:46:37.357817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.144 [2024-11-20 13:46:37.390028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.144 [2024-11-20 13:46:37.390068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.144 [2024-11-20 13:46:37.390079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.144 [2024-11-20 13:46:37.390087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.390146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.390154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.145 [2024-11-20 13:46:37.390162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.390169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.390246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.390256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.145 [2024-11-20 13:46:37.390264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.390271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.390286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.390293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.145 [2024-11-20 13:46:37.390300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.390307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.467019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.467069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.145 [2024-11-20 13:46:37.467080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.467087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.145 [2024-11-20 13:46:37.530226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.145 [2024-11-20 13:46:37.530304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.145 [2024-11-20 13:46:37.530377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.145 [2024-11-20 13:46:37.530488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:38.145 [2024-11-20 13:46:37.530538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.145 [2024-11-20 13:46:37.530595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.145 [2024-11-20 13:46:37.530647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.145 [2024-11-20 13:46:37.530655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.145 [2024-11-20 13:46:37.530662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.145 [2024-11-20 13:46:37.530765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.134 ms, result 0 00:26:40.676 00:26:40.676 00:26:40.676 13:46:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:40.676 [2024-11-20 13:46:39.710413] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:26:40.676 [2024-11-20 13:46:39.710534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77698 ] 00:26:40.676 [2024-11-20 13:46:39.873897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.676 [2024-11-20 13:46:39.981939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.932 [2024-11-20 13:46:40.238960] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.932 [2024-11-20 13:46:40.239060] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.192 [2024-11-20 13:46:40.391854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.391915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:41.192 [2024-11-20 13:46:40.391931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.192 [2024-11-20 13:46:40.391939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.392007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.392019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.192 [2024-11-20 13:46:40.392030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:41.192 [2024-11-20 13:46:40.392038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.392057] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:41.192 [2024-11-20 13:46:40.392790] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:41.192 [2024-11-20 13:46:40.392812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.392820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.192 [2024-11-20 13:46:40.392836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:26:41.192 [2024-11-20 13:46:40.392844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.394336] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:41.192 [2024-11-20 13:46:40.406768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.406830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:41.192 [2024-11-20 13:46:40.406847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.433 ms 00:26:41.192 [2024-11-20 13:46:40.406856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.406951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.406962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:41.192 [2024-11-20 13:46:40.406981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:41.192 [2024-11-20 13:46:40.406989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.414735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:41.192 [2024-11-20 13:46:40.414783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.192 [2024-11-20 13:46:40.414795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.657 ms 00:26:41.192 [2024-11-20 13:46:40.414808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.414893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.414902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.192 [2024-11-20 13:46:40.414910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:41.192 [2024-11-20 13:46:40.414917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.414992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.415001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:41.192 [2024-11-20 13:46:40.415010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:41.192 [2024-11-20 13:46:40.415017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.415047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:41.192 [2024-11-20 13:46:40.418864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.418904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.192 [2024-11-20 13:46:40.418916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.825 ms 00:26:41.192 [2024-11-20 13:46:40.418926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.418975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.192 [2024-11-20 13:46:40.418985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:41.192 [2024-11-20 13:46:40.418994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:41.192 [2024-11-20 13:46:40.419001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.192 [2024-11-20 13:46:40.419056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:41.192 [2024-11-20 13:46:40.419078] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:41.192 [2024-11-20 13:46:40.419115] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:41.192 [2024-11-20 13:46:40.419132] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:41.192 [2024-11-20 13:46:40.419236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:41.192 [2024-11-20 13:46:40.419247] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:41.192 [2024-11-20 13:46:40.419258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:41.193 [2024-11-20 13:46:40.419268] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419277] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:41.193 [2024-11-20 13:46:40.419293] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:41.193 [2024-11-20 13:46:40.419301] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:41.193 [2024-11-20 13:46:40.419310] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:41.193 [2024-11-20 13:46:40.419318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.193 [2024-11-20 13:46:40.419325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:41.193 [2024-11-20 13:46:40.419333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:26:41.193 [2024-11-20 13:46:40.419340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.193 [2024-11-20 13:46:40.419425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.193 [2024-11-20 13:46:40.419441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:41.193 [2024-11-20 13:46:40.419449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:41.193 [2024-11-20 13:46:40.419456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.193 [2024-11-20 13:46:40.419564] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:41.193 [2024-11-20 13:46:40.419575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:41.193 [2024-11-20 13:46:40.419583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:41.193 [2024-11-20 13:46:40.419606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:41.193 [2024-11-20 13:46:40.419628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.193 [2024-11-20 13:46:40.419642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:41.193 [2024-11-20 13:46:40.419651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:41.193 [2024-11-20 13:46:40.419657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.193 [2024-11-20 13:46:40.419663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:41.193 [2024-11-20 13:46:40.419671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:41.193 [2024-11-20 13:46:40.419683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:41.193 [2024-11-20 13:46:40.419697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419703] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:41.193 [2024-11-20 13:46:40.419716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:41.193 [2024-11-20 13:46:40.419736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:41.193 [2024-11-20 13:46:40.419756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:41.193 [2024-11-20 13:46:40.419776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:41.193 [2024-11-20 13:46:40.419796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.193 [2024-11-20 13:46:40.419809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:41.193 [2024-11-20 13:46:40.419816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:41.193 [2024-11-20 13:46:40.419822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.193 [2024-11-20 13:46:40.419829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:41.193 [2024-11-20 13:46:40.419836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:41.193 [2024-11-20 13:46:40.419842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:41.193 [2024-11-20 13:46:40.419856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:41.193 [2024-11-20 13:46:40.419862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419869] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:41.193 [2024-11-20 13:46:40.419877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:41.193 [2024-11-20 13:46:40.419884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.193 [2024-11-20 13:46:40.419899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:41.193 [2024-11-20 13:46:40.419905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:41.193 [2024-11-20 13:46:40.419912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:41.193 
[2024-11-20 13:46:40.419918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:41.193 [2024-11-20 13:46:40.419925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:41.193 [2024-11-20 13:46:40.419931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:41.193 [2024-11-20 13:46:40.419939] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:41.193 [2024-11-20 13:46:40.419948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.419956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:41.193 [2024-11-20 13:46:40.419964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:41.193 [2024-11-20 13:46:40.419988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:41.193 [2024-11-20 13:46:40.419996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:41.193 [2024-11-20 13:46:40.420004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:41.193 [2024-11-20 13:46:40.420011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:41.193 [2024-11-20 13:46:40.420019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:41.193 [2024-11-20 13:46:40.420027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:41.193 [2024-11-20 13:46:40.420034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:41.193 [2024-11-20 13:46:40.420042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.420049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.420056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.420063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.420071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:41.193 [2024-11-20 13:46:40.420077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:41.193 [2024-11-20 13:46:40.420089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.420097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.193 [2024-11-20 13:46:40.420105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:41.193 [2024-11-20 13:46:40.420113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:41.193 [2024-11-20 13:46:40.420120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:41.193 [2024-11-20 13:46:40.420128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.193 [2024-11-20 13:46:40.420137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:41.193 [2024-11-20 13:46:40.420145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:26:41.194 [2024-11-20 13:46:40.420152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.451211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.451269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.194 [2024-11-20 13:46:40.451283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.012 ms 00:26:41.194 [2024-11-20 13:46:40.451291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.451400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.451409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:41.194 [2024-11-20 13:46:40.451418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:41.194 [2024-11-20 13:46:40.451426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.502495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.502563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.194 [2024-11-20 13:46:40.502579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.988 ms 00:26:41.194 [2024-11-20 13:46:40.502588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.502657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.502669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.194 [2024-11-20 13:46:40.502683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:41.194 [2024-11-20 13:46:40.502690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.503292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.503324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.194 [2024-11-20 13:46:40.503334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:26:41.194 [2024-11-20 13:46:40.503342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.503483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.503494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.194 [2024-11-20 13:46:40.503502] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:26:41.194 [2024-11-20 13:46:40.503515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.519155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.519210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.194 [2024-11-20 13:46:40.519226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.617 ms 00:26:41.194 [2024-11-20 13:46:40.519234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.533297] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:41.194 [2024-11-20 13:46:40.533365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:41.194 [2024-11-20 13:46:40.533380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.533389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:41.194 [2024-11-20 13:46:40.533402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:26:41.194 [2024-11-20 13:46:40.533410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.559141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.559210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:41.194 [2024-11-20 13:46:40.559225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.657 ms 00:26:41.194 [2024-11-20 13:46:40.559235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.572627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.572690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:41.194 [2024-11-20 13:46:40.572705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.316 ms 00:26:41.194 [2024-11-20 13:46:40.572713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.585749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.585813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:41.194 [2024-11-20 13:46:40.585826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.974 ms 00:26:41.194 [2024-11-20 13:46:40.585833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.194 [2024-11-20 13:46:40.586562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.194 [2024-11-20 13:46:40.586587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:41.194 [2024-11-20 13:46:40.586598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:26:41.194 [2024-11-20 13:46:40.586610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.645945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.646015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:41.451 [2024-11-20 13:46:40.646036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.312 ms 00:26:41.451 [2024-11-20 13:46:40.646044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.657026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:41.451 [2024-11-20 13:46:40.660001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.660039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:41.451 [2024-11-20 13:46:40.660053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:26:41.451 [2024-11-20 13:46:40.660063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.660162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.660173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:41.451 [2024-11-20 13:46:40.660182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:41.451 [2024-11-20 13:46:40.660192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.660255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.660274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:41.451 [2024-11-20 13:46:40.660283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:41.451 [2024-11-20 13:46:40.660291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.660309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.660317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:41.451 [2024-11-20 13:46:40.660325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.451 [2024-11-20 13:46:40.660332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.660365] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:41.451 [2024-11-20 13:46:40.660375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.660382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:41.451 [2024-11-20 13:46:40.660390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:41.451 [2024-11-20 13:46:40.660397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.684914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.684964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:41.451 [2024-11-20 13:46:40.684984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.497 ms 00:26:41.451 [2024-11-20 13:46:40.684999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.451 [2024-11-20 13:46:40.685088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.451 [2024-11-20 13:46:40.685098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:41.451 [2024-11-20 13:46:40.685106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:41.451 [2024-11-20 13:46:40.685114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:41.451 [2024-11-20 13:46:40.686131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.826 ms, result 0 00:26:42.865  [2024-11-20T13:46:43.276Z] Copying: 41/1024 [MB] (41 MBps) [2024-11-20T13:46:44.218Z] Copying: 87/1024 [MB] (46 MBps) [2024-11-20T13:46:45.152Z] Copying: 112/1024 [MB] (24 MBps) [2024-11-20T13:46:46.085Z] Copying: 141/1024 [MB] (29 MBps) [2024-11-20T13:46:47.023Z] Copying: 188/1024 [MB] (46 MBps) [2024-11-20T13:46:47.956Z] Copying: 235/1024 [MB] (47 MBps) [2024-11-20T13:46:48.890Z] Copying: 279/1024 [MB] (43 MBps) [2024-11-20T13:46:50.265Z] Copying: 323/1024 [MB] (44 MBps) [2024-11-20T13:46:51.213Z] Copying: 369/1024 [MB] (46 MBps) [2024-11-20T13:46:52.174Z] Copying: 417/1024 [MB] (47 MBps) [2024-11-20T13:46:53.107Z] Copying: 463/1024 [MB] (46 MBps) [2024-11-20T13:46:54.041Z] Copying: 510/1024 [MB] (47 MBps) [2024-11-20T13:46:54.976Z] Copying: 555/1024 [MB] (44 MBps) [2024-11-20T13:46:55.912Z] Copying: 588/1024 [MB] (32 MBps) [2024-11-20T13:46:56.920Z] Copying: 618/1024 [MB] (30 MBps) [2024-11-20T13:46:58.299Z] Copying: 645/1024 [MB] (26 MBps) [2024-11-20T13:46:59.234Z] Copying: 670/1024 [MB] (24 MBps) [2024-11-20T13:47:00.168Z] Copying: 713/1024 [MB] (42 MBps) [2024-11-20T13:47:01.099Z] Copying: 750/1024 [MB] (37 MBps) [2024-11-20T13:47:02.056Z] Copying: 795/1024 [MB] (44 MBps) [2024-11-20T13:47:02.991Z] Copying: 844/1024 [MB] (49 MBps) [2024-11-20T13:47:03.923Z] Copying: 895/1024 [MB] (50 MBps) [2024-11-20T13:47:05.295Z] Copying: 941/1024 [MB] (45 MBps) [2024-11-20T13:47:05.862Z] Copying: 989/1024 [MB] (47 MBps) [2024-11-20T13:47:06.869Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-11-20 13:47:06.487266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.442 [2024-11-20 13:47:06.487322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:07.442 [2024-11-20 13:47:06.487335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:07.442 [2024-11-20 13:47:06.487343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.442 [2024-11-20 13:47:06.487364] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:07.442 [2024-11-20 13:47:06.490034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.442 [2024-11-20 13:47:06.490067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:07.442 [2024-11-20 13:47:06.490083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:27:07.442 [2024-11-20 13:47:06.490092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.442 [2024-11-20 13:47:06.490307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.442 [2024-11-20 13:47:06.490322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:07.442 [2024-11-20 13:47:06.490330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:27:07.442 [2024-11-20 13:47:06.490337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.442 [2024-11-20 13:47:06.494443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.442 [2024-11-20 13:47:06.494469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:07.442 [2024-11-20 13:47:06.494480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.093 ms 00:27:07.443 [2024-11-20 
13:47:06.494488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.501119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.501150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:07.443 [2024-11-20 13:47:06.501160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.610 ms 00:27:07.443 [2024-11-20 13:47:06.501169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.526347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.526391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:07.443 [2024-11-20 13:47:06.526404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.115 ms 00:27:07.443 [2024-11-20 13:47:06.526412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.541674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.541717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:07.443 [2024-11-20 13:47:06.541731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.233 ms 00:27:07.443 [2024-11-20 13:47:06.541739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.541874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.541893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:07.443 [2024-11-20 13:47:06.541903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:07.443 [2024-11-20 13:47:06.541911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.565927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.565983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:07.443 [2024-11-20 13:47:06.565996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms 00:27:07.443 [2024-11-20 13:47:06.566003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.589689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.589741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:07.443 [2024-11-20 13:47:06.589752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.663 ms 00:27:07.443 [2024-11-20 13:47:06.589760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.611804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.611849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:07.443 [2024-11-20 13:47:06.611861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.020 ms 00:27:07.443 [2024-11-20 13:47:06.611868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.633416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.443 [2024-11-20 13:47:06.633458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:07.443 [2024-11-20 13:47:06.633470] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.498 ms 00:27:07.443 [2024-11-20 13:47:06.633477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.443 [2024-11-20 13:47:06.633498] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:07.443 [2024-11-20 13:47:06.633511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:27:07.443 [2024-11-20 13:47:06.633696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:07.443 [2024-11-20 13:47:06.633987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.633996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634266] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:07.444 [2024-11-20 13:47:06.634297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:07.444 [2024-11-20 13:47:06.634307] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d59d0621-983a-490d-b4c2-bda30131d214 00:27:07.444 [2024-11-20 13:47:06.634315] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:07.444 [2024-11-20 13:47:06.634322] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:07.444 [2024-11-20 13:47:06.634329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:07.444 [2024-11-20 13:47:06.634336] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:07.444 [2024-11-20 13:47:06.634344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:07.444 [2024-11-20 13:47:06.634351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:07.444 [2024-11-20 13:47:06.634363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:07.444 [2024-11-20 13:47:06.634370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:07.444 [2024-11-20 13:47:06.634376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:07.444 [2024-11-20 13:47:06.634383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.444 [2024-11-20 13:47:06.634391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:07.444 [2024-11-20 13:47:06.634399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:27:07.444 [2024-11-20 13:47:06.634406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.646629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.444 [2024-11-20 13:47:06.646670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:07.444 [2024-11-20 13:47:06.646682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.203 ms 00:27:07.444 [2024-11-20 13:47:06.646691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.647057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.444 [2024-11-20 13:47:06.647072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:07.444 [2024-11-20 13:47:06.647080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:27:07.444 [2024-11-20 13:47:06.647095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.679689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.679746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.444 [2024-11-20 13:47:06.679759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.679767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.679833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:07.444 [2024-11-20 13:47:06.679841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.444 [2024-11-20 13:47:06.679849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.679860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.679927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.679941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.444 [2024-11-20 13:47:06.679954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.679965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.679993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.680001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.444 [2024-11-20 13:47:06.680008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.680015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.757726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.757777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.444 [2024-11-20 13:47:06.757788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.757796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.827794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.827858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.444 [2024-11-20 13:47:06.827872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.827888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.827987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.828005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.444 [2024-11-20 13:47:06.828015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.828023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.828064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.828089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.444 [2024-11-20 13:47:06.828102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.828113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:47:06.828217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.828230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.444 [2024-11-20 13:47:06.828243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.444 [2024-11-20 13:47:06.828255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 
13:47:06.828288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.444 [2024-11-20 13:47:06.828302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:07.445 [2024-11-20 13:47:06.828311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.445 [2024-11-20 13:47:06.828318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:47:06.828359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.445 [2024-11-20 13:47:06.828368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.445 [2024-11-20 13:47:06.828378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.445 [2024-11-20 13:47:06.828391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:47:06.828436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.445 [2024-11-20 13:47:06.828450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.445 [2024-11-20 13:47:06.828460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.445 [2024-11-20 13:47:06.828467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:47:06.828595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.293 ms, result 0 00:27:08.397 00:27:08.397 00:27:08.397 13:47:07 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:10.296 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:10.296 13:47:09 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:10.296 [2024-11-20 13:47:09.721666] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:27:10.296 [2024-11-20 13:47:09.721787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78012 ] 00:27:10.553 [2024-11-20 13:47:09.880643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.811 [2024-11-20 13:47:09.991100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.069 [2024-11-20 13:47:10.248448] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:11.069 [2024-11-20 13:47:10.248510] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:11.069 [2024-11-20 13:47:10.401261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.069 [2024-11-20 13:47:10.401313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:11.069 [2024-11-20 13:47:10.401329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:11.069 [2024-11-20 13:47:10.401337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.069 [2024-11-20 13:47:10.401378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.069 [2024-11-20 13:47:10.401388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:11.070 [2024-11-20 13:47:10.401398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:11.070 [2024-11-20 13:47:10.401406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.401425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:11.070 [2024-11-20 13:47:10.402118] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:11.070 [2024-11-20 13:47:10.402140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.402148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:11.070 [2024-11-20 13:47:10.402157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:27:11.070 [2024-11-20 13:47:10.402164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.403159] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:11.070 [2024-11-20 13:47:10.415269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.415309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:11.070 [2024-11-20 13:47:10.415322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.112 ms 00:27:11.070 [2024-11-20 13:47:10.415330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.415383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.415393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:11.070 [2024-11-20 13:47:10.415401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:11.070 [2024-11-20 13:47:10.415408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.420056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:11.070 [2024-11-20 13:47:10.420087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:11.070 [2024-11-20 13:47:10.420096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:27:11.070 [2024-11-20 13:47:10.420108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.420172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.420180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:11.070 [2024-11-20 13:47:10.420188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:11.070 [2024-11-20 13:47:10.420195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.420238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.420247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:11.070 [2024-11-20 13:47:10.420255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:11.070 [2024-11-20 13:47:10.420262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.420285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:11.070 [2024-11-20 13:47:10.423601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.423629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:11.070 [2024-11-20 13:47:10.423638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:27:11.070 [2024-11-20 13:47:10.423648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.423675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.423682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:11.070 [2024-11-20 13:47:10.423690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:11.070 [2024-11-20 13:47:10.423697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.423715] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:11.070 [2024-11-20 13:47:10.423733] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:11.070 [2024-11-20 13:47:10.423767] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:11.070 [2024-11-20 13:47:10.423783] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:11.070 [2024-11-20 13:47:10.423885] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:11.070 [2024-11-20 13:47:10.423901] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:11.070 [2024-11-20 13:47:10.423912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:11.070 [2024-11-20 13:47:10.423921] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:11.070 [2024-11-20 13:47:10.423930] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:11.070 [2024-11-20 13:47:10.423939] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:11.070 [2024-11-20 13:47:10.423947] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:11.070 [2024-11-20 13:47:10.423954] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:11.070 [2024-11-20 13:47:10.423963] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:11.070 [2024-11-20 13:47:10.423980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.423988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:11.070 [2024-11-20 13:47:10.423995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:27:11.070 [2024-11-20 13:47:10.424002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.424085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.070 [2024-11-20 13:47:10.424092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:11.070 [2024-11-20 13:47:10.424100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:11.070 [2024-11-20 13:47:10.424107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.070 [2024-11-20 13:47:10.424219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:11.070 [2024-11-20 13:47:10.424236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:11.070 [2024-11-20 13:47:10.424244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:11.070 [2024-11-20 13:47:10.424266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:11.070 [2024-11-20 13:47:10.424286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:11.070 [2024-11-20 13:47:10.424299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:11.070 [2024-11-20 13:47:10.424305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:11.070 [2024-11-20 13:47:10.424311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:11.070 [2024-11-20 13:47:10.424318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:11.070 [2024-11-20 13:47:10.424325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:11.070 [2024-11-20 13:47:10.424337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:11.070 [2024-11-20 13:47:10.424351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424357] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:11.070 [2024-11-20 13:47:10.424370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:11.070 [2024-11-20 13:47:10.424389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:11.070 [2024-11-20 13:47:10.424409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:11.070 [2024-11-20 13:47:10.424428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:11.070 [2024-11-20 13:47:10.424440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:11.070 [2024-11-20 13:47:10.424446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:11.070 [2024-11-20 13:47:10.424459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:11.070 [2024-11-20 13:47:10.424465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:11.070 [2024-11-20 13:47:10.424471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:11.070 [2024-11-20 13:47:10.424478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:11.070 [2024-11-20 13:47:10.424484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:11.070 [2024-11-20 13:47:10.424490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:11.070 [2024-11-20 13:47:10.424502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:11.070 [2024-11-20 13:47:10.424509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.070 [2024-11-20 13:47:10.424515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:11.070 [2024-11-20 13:47:10.424522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:11.071 [2024-11-20 13:47:10.424529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:11.071 [2024-11-20 13:47:10.424537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:11.071 [2024-11-20 13:47:10.424545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:11.071 [2024-11-20 13:47:10.424552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:11.071 [2024-11-20 13:47:10.424558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:11.071 
[2024-11-20 13:47:10.424565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:11.071 [2024-11-20 13:47:10.424571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:11.071 [2024-11-20 13:47:10.424577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:11.071 [2024-11-20 13:47:10.424585] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:11.071 [2024-11-20 13:47:10.424593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:11.071 [2024-11-20 13:47:10.424608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:11.071 [2024-11-20 13:47:10.424615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:11.071 [2024-11-20 13:47:10.424622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:11.071 [2024-11-20 13:47:10.424629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:11.071 [2024-11-20 13:47:10.424635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:11.071 [2024-11-20 13:47:10.424642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:11.071 [2024-11-20 13:47:10.424649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:11.071 [2024-11-20 13:47:10.424655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:11.071 [2024-11-20 13:47:10.424662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:11.071 [2024-11-20 13:47:10.424696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:11.071 [2024-11-20 13:47:10.424705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:11.071 [2024-11-20 13:47:10.424720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:11.071 [2024-11-20 13:47:10.424727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:11.071 [2024-11-20 13:47:10.424734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:11.071 [2024-11-20 13:47:10.424741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.424748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:11.071 [2024-11-20 13:47:10.424755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:27:11.071 [2024-11-20 13:47:10.424762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.071 [2024-11-20 13:47:10.449982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.450017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:11.071 [2024-11-20 13:47:10.450027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.167 ms 00:27:11.071 [2024-11-20 13:47:10.450034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.071 [2024-11-20 13:47:10.450118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.450126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:11.071 [2024-11-20 13:47:10.450134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:11.071 [2024-11-20 13:47:10.450141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.071 [2024-11-20 13:47:10.491659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.491705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:11.071 [2024-11-20 13:47:10.491717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.471 ms 00:27:11.071 [2024-11-20 13:47:10.491724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.071 [2024-11-20 13:47:10.491767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.491777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:11.071 [2024-11-20 13:47:10.491789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:11.071 [2024-11-20 13:47:10.491795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.071 [2024-11-20 13:47:10.492159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.492183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:11.071 [2024-11-20 13:47:10.492192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:27:11.071 [2024-11-20 13:47:10.492200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.071 [2024-11-20 13:47:10.492319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.071 [2024-11-20 13:47:10.492337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:11.071 [2024-11-20 13:47:10.492345] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:27:11.071 [2024-11-20 13:47:10.492357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.505221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.505253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:11.330 [2024-11-20 13:47:10.505262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.845 ms 00:27:11.330 [2024-11-20 13:47:10.505272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.517383] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:11.330 [2024-11-20 13:47:10.517419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:11.330 [2024-11-20 13:47:10.517431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.517439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:11.330 [2024-11-20 13:47:10.517448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.052 ms 00:27:11.330 [2024-11-20 13:47:10.517455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.541420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.541458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:11.330 [2024-11-20 13:47:10.541470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.924 ms 00:27:11.330 [2024-11-20 13:47:10.541479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.552833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.552865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:11.330 [2024-11-20 13:47:10.552874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.302 ms 00:27:11.330 [2024-11-20 13:47:10.552882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.563825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.563856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:11.330 [2024-11-20 13:47:10.563866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.912 ms 00:27:11.330 [2024-11-20 13:47:10.563872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.564479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.564505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:11.330 [2024-11-20 13:47:10.564513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:27:11.330 [2024-11-20 13:47:10.564523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.618047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.618097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:11.330 [2024-11-20 13:47:10.618114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.506 ms 00:27:11.330 [2024-11-20 13:47:10.618123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.628620] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:11.330 [2024-11-20 13:47:10.631279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.631314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:11.330 [2024-11-20 13:47:10.631328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:27:11.330 [2024-11-20 13:47:10.631336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.631435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.631446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:11.330 [2024-11-20 13:47:10.631454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:11.330 [2024-11-20 13:47:10.631464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.631528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.631547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:11.330 [2024-11-20 13:47:10.631556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:11.330 [2024-11-20 13:47:10.631563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.631583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.631591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:11.330 [2024-11-20 13:47:10.631598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:11.330 [2024-11-20 13:47:10.631605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.631637] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:11.330 [2024-11-20 13:47:10.631658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.631666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:11.330 [2024-11-20 13:47:10.631674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:11.330 [2024-11-20 13:47:10.631681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.654510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.654555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:11.330 [2024-11-20 13:47:10.654568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.813 ms 00:27:11.330 [2024-11-20 13:47:10.654581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.330 [2024-11-20 13:47:10.654652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.330 [2024-11-20 13:47:10.654661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:11.330 [2024-11-20 13:47:10.654670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:11.330 [2024-11-20 13:47:10.654677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:11.330 [2024-11-20 13:47:10.655929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.245 ms, result 0 00:27:12.261  [2024-11-20T13:47:13.059Z] Copying: 43/1024 [MB] (43 MBps) [2024-11-20T13:47:13.992Z] Copying: 88/1024 [MB] (44 MBps) [2024-11-20T13:47:14.925Z] Copying: 134/1024 [MB] (46 MBps) [2024-11-20T13:47:15.857Z] Copying: 180/1024 [MB] (46 MBps) [2024-11-20T13:47:16.789Z] Copying: 224/1024 [MB] (44 MBps) [2024-11-20T13:47:17.765Z] Copying: 270/1024 [MB] (45 MBps) [2024-11-20T13:47:18.699Z] Copying: 317/1024 [MB] (47 MBps) [2024-11-20T13:47:20.072Z] Copying: 361/1024 [MB] (43 MBps) [2024-11-20T13:47:21.006Z] Copying: 403/1024 [MB] (42 MBps) [2024-11-20T13:47:21.943Z] Copying: 446/1024 [MB] (42 MBps) [2024-11-20T13:47:22.878Z] Copying: 489/1024 [MB] (43 MBps) [2024-11-20T13:47:23.811Z] Copying: 534/1024 [MB] (45 MBps) [2024-11-20T13:47:24.745Z] Copying: 580/1024 [MB] (45 MBps) [2024-11-20T13:47:25.679Z] Copying: 626/1024 [MB] (45 MBps) [2024-11-20T13:47:27.053Z] Copying: 672/1024 [MB] (45 MBps) [2024-11-20T13:47:27.990Z] Copying: 718/1024 [MB] (46 MBps) [2024-11-20T13:47:28.925Z] Copying: 763/1024 [MB] (45 MBps) [2024-11-20T13:47:29.856Z] Copying: 809/1024 [MB] (45 MBps) [2024-11-20T13:47:30.789Z] Copying: 852/1024 [MB] (43 MBps) [2024-11-20T13:47:31.723Z] Copying: 897/1024 [MB] (44 MBps) [2024-11-20T13:47:33.096Z] Copying: 941/1024 [MB] (44 MBps) [2024-11-20T13:47:34.029Z] Copying: 986/1024 [MB] (44 MBps) [2024-11-20T13:47:34.595Z] Copying: 1023/1024 [MB] (37 MBps) [2024-11-20T13:47:34.595Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-20 13:47:34.470446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.470586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:35.168 [2024-11-20 13:47:34.470647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:35.168 [2024-11-20 13:47:34.470679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.471595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:35.168 [2024-11-20 13:47:34.475941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.476053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:35.168 [2024-11-20 13:47:34.476104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:27:35.168 [2024-11-20 13:47:34.476122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.486817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.486918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:35.168 [2024-11-20 13:47:34.486978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.777 ms 00:27:35.168 [2024-11-20 13:47:34.487004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.501807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.501914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:35.168 [2024-11-20 13:47:34.501964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.778 ms 00:27:35.168 [2024-11-20 13:47:34.501991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.506845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.506930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:35.168 [2024-11-20 13:47:34.506981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:27:35.168 [2024-11-20 13:47:34.507000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.525808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.525946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:35.168 [2024-11-20 13:47:34.526003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.759 ms 00:27:35.168 [2024-11-20 13:47:34.526022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.538441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.538550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:35.168 [2024-11-20 13:47:34.538565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.381 ms 00:27:35.168 [2024-11-20 13:47:34.538571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.168 [2024-11-20 13:47:34.583716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.168 [2024-11-20 13:47:34.583763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:35.168 [2024-11-20 13:47:34.583774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.114 ms 00:27:35.168 [2024-11-20 13:47:34.583780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.427 [2024-11-20 13:47:34.602617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.427 [2024-11-20 13:47:34.602652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:35.427 [2024-11-20 13:47:34.602662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.824 ms 00:27:35.427 [2024-11-20 13:47:34.602669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.427 [2024-11-20 13:47:34.620468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.427 [2024-11-20 13:47:34.620509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:35.427 [2024-11-20 13:47:34.620518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.770 ms 00:27:35.427 [2024-11-20 13:47:34.620525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.427 [2024-11-20 13:47:34.638244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.427 [2024-11-20 13:47:34.638273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:35.427 [2024-11-20 13:47:34.638281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.691 ms 00:27:35.427 [2024-11-20 13:47:34.638286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.427 [2024-11-20 13:47:34.655760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.427 [2024-11-20 13:47:34.655798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:35.427 [2024-11-20 13:47:34.655807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.429 ms 00:27:35.427 
[2024-11-20 13:47:34.655813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.427 [2024-11-20 13:47:34.655842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:35.427 [2024-11-20 13:47:34.655854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117760 / 261120 wr_cnt: 1 state: open 00:27:35.427 [2024-11-20 13:47:34.655863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:35.427 [2024-11-20 13:47:34.655918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.655998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656004] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 
13:47:34.656158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:35.428 [2024-11-20 13:47:34.656307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:35.428 [2024-11-20 13:47:34.656472] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:35.429 [2024-11-20 13:47:34.656478] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d59d0621-983a-490d-b4c2-bda30131d214 00:27:35.429 [2024-11-20 13:47:34.656485] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117760 00:27:35.429 [2024-11-20 13:47:34.656491] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118720 00:27:35.429 [2024-11-20 13:47:34.656496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117760 00:27:35.429 [2024-11-20 13:47:34.656503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:27:35.429 [2024-11-20 13:47:34.656509] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:35.429 [2024-11-20 13:47:34.656519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:35.429 [2024-11-20 13:47:34.656536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:35.429 [2024-11-20 13:47:34.656542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:35.429 [2024-11-20 13:47:34.656547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:35.429 [2024-11-20 13:47:34.656552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.429 [2024-11-20 13:47:34.656558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:35.429 [2024-11-20 13:47:34.656565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:27:35.429 [2024-11-20 13:47:34.656571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.666420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.429 [2024-11-20 13:47:34.666451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:35.429 [2024-11-20 13:47:34.666460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.837 ms 00:27:35.429 [2024-11-20 13:47:34.666470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.666751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.429 [2024-11-20 13:47:34.666758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:35.429 [2024-11-20 13:47:34.666764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:27:35.429 [2024-11-20 13:47:34.666770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.693045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.693088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.429 [2024-11-20 13:47:34.693098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.693104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.693157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.693163] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.429 [2024-11-20 13:47:34.693169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.693175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.693232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.693241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.429 [2024-11-20 13:47:34.693251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.693257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.693269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.693275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.429 [2024-11-20 13:47:34.693281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.693287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.755139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.755187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.429 [2024-11-20 13:47:34.755201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.755208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.804875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.804920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.429 [2024-11-20 13:47:34.804929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.804935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.805022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.429 [2024-11-20 13:47:34.805029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.805037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.805071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.429 [2024-11-20 13:47:34.805077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.805083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.805159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.429 [2024-11-20 13:47:34.805166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.805171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805196] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.805203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:35.429 [2024-11-20 13:47:34.805209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.805214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.805248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.429 [2024-11-20 13:47:34.805254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.805260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.429 [2024-11-20 13:47:34.805300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.429 [2024-11-20 13:47:34.805306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.429 [2024-11-20 13:47:34.805312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.429 [2024-11-20 13:47:34.805402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.094 ms, result 0 00:27:37.954 00:27:37.954 00:27:37.954 13:47:37 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:37.954 [2024-11-20 13:47:37.331756] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
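[Editor's note] The spdk_dd invocation above re-reads part of the test area from ftl0 to verify the restore. Its --skip and --count values are in FTL logical blocks; assuming the FTL bdev's customary 4 KiB block size (an assumption for illustration, not stated in this log), they line up exactly with the "Copying: .../1024 [MB]" progress runs in this section: 262144 blocks is 1024 MiB of payload and 131072 blocks is a 512 MiB offset. A back-of-envelope check in C:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Assumed FTL logical block size; the log itself only reports MB totals. */
        const uint64_t block_size = 4096;
        const uint64_t skip  = 131072;  /* --skip=131072  */
        const uint64_t count = 262144;  /* --count=262144 */

        /* 131072 * 4096 B = 512 MiB offset; 262144 * 4096 B = 1024 MiB payload,
         * matching the 1024 [MB] copy totals in the progress lines. */
        printf("offset: %llu MiB\n", (unsigned long long)(skip  * block_size >> 20));
        printf("length: %llu MiB\n", (unsigned long long)(count * block_size >> 20));
        return 0;
    }

At the 43-47 MBps averages the progress lines report, one 1024 MB pass takes roughly 22-24 seconds, which matches the gaps between the 'FTL startup' and shutdown timestamps in this section (about 13:47:10 to 13:47:34, and 13:47:38 to 13:48:00). [End of editor's note]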
00:27:37.954 [2024-11-20 13:47:37.331885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78280 ] 00:27:38.212 [2024-11-20 13:47:37.488930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.212 [2024-11-20 13:47:37.588297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.469 [2024-11-20 13:47:37.847711] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.469 [2024-11-20 13:47:37.847774] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.728 [2024-11-20 13:47:38.000534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.000591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:38.729 [2024-11-20 13:47:38.000606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.729 [2024-11-20 13:47:38.000614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.000667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.000677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.729 [2024-11-20 13:47:38.000687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:38.729 [2024-11-20 13:47:38.000694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.000713] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:38.729 [2024-11-20 13:47:38.001532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:38.729 [2024-11-20 13:47:38.001553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.001560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.729 [2024-11-20 13:47:38.001569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:27:38.729 [2024-11-20 13:47:38.001576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.002875] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:38.729 [2024-11-20 13:47:38.015554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.015609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:38.729 [2024-11-20 13:47:38.015623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.679 ms 00:27:38.729 [2024-11-20 13:47:38.015631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.015709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.015719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:38.729 [2024-11-20 13:47:38.015727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:38.729 [2024-11-20 13:47:38.015735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.021199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:38.729 [2024-11-20 13:47:38.021241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.729 [2024-11-20 13:47:38.021254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.388 ms 00:27:38.729 [2024-11-20 13:47:38.021272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.021355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.021365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.729 [2024-11-20 13:47:38.021373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:38.729 [2024-11-20 13:47:38.021380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.021432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.021441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:38.729 [2024-11-20 13:47:38.021449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:38.729 [2024-11-20 13:47:38.021456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.021481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:38.729 [2024-11-20 13:47:38.024706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.024736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.729 [2024-11-20 13:47:38.024745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:27:38.729 [2024-11-20 13:47:38.024755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.024785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.024794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:38.729 [2024-11-20 13:47:38.024802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:38.729 [2024-11-20 13:47:38.024809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.024829] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:38.729 [2024-11-20 13:47:38.024854] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:38.729 [2024-11-20 13:47:38.024891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:38.729 [2024-11-20 13:47:38.024907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:38.729 [2024-11-20 13:47:38.025021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:38.729 [2024-11-20 13:47:38.025032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:38.729 [2024-11-20 13:47:38.025042] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:38.729 [2024-11-20 13:47:38.025053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025061] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025069] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:38.729 [2024-11-20 13:47:38.025076] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:38.729 [2024-11-20 13:47:38.025083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:38.729 [2024-11-20 13:47:38.025092] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:38.729 [2024-11-20 13:47:38.025100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.025107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:38.729 [2024-11-20 13:47:38.025115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:27:38.729 [2024-11-20 13:47:38.025121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.025203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.729 [2024-11-20 13:47:38.025211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:38.729 [2024-11-20 13:47:38.025218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:38.729 [2024-11-20 13:47:38.025225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.729 [2024-11-20 13:47:38.025328] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:38.729 [2024-11-20 13:47:38.025337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:38.729 [2024-11-20 13:47:38.025345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:38.729 [2024-11-20 13:47:38.025366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:38.729 [2024-11-20 13:47:38.025386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.729 [2024-11-20 13:47:38.025399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:38.729 [2024-11-20 13:47:38.025405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:38.729 [2024-11-20 13:47:38.025411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.729 [2024-11-20 13:47:38.025418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:38.729 [2024-11-20 13:47:38.025425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:38.729 [2024-11-20 13:47:38.025437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:38.729 [2024-11-20 13:47:38.025452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025458] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:38.729 [2024-11-20 13:47:38.025471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:38.729 [2024-11-20 13:47:38.025490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:38.729 [2024-11-20 13:47:38.025508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:38.729 [2024-11-20 13:47:38.025527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.729 [2024-11-20 13:47:38.025539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:38.729 [2024-11-20 13:47:38.025546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.729 [2024-11-20 13:47:38.025558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:38.729 [2024-11-20 13:47:38.025564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:38.729 [2024-11-20 13:47:38.025571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.729 [2024-11-20 13:47:38.025577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:38.729 [2024-11-20 13:47:38.025583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:38.729 [2024-11-20 13:47:38.025589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.729 [2024-11-20 13:47:38.025596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:38.729 [2024-11-20 13:47:38.025602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:38.730 [2024-11-20 13:47:38.025609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.730 [2024-11-20 13:47:38.025615] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:38.730 [2024-11-20 13:47:38.025622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:38.730 [2024-11-20 13:47:38.025629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.730 [2024-11-20 13:47:38.025636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.730 [2024-11-20 13:47:38.025643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:38.730 [2024-11-20 13:47:38.025650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:38.730 [2024-11-20 13:47:38.025657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:38.730 
[2024-11-20 13:47:38.025663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:38.730 [2024-11-20 13:47:38.025669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:38.730 [2024-11-20 13:47:38.025676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:38.730 [2024-11-20 13:47:38.025683] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:38.730 [2024-11-20 13:47:38.025692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:38.730 [2024-11-20 13:47:38.025708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:38.730 [2024-11-20 13:47:38.025715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:38.730 [2024-11-20 13:47:38.025721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:38.730 [2024-11-20 13:47:38.025729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:38.730 [2024-11-20 13:47:38.025736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:38.730 [2024-11-20 13:47:38.025742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:38.730 [2024-11-20 13:47:38.025749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:38.730 [2024-11-20 13:47:38.025755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:38.730 [2024-11-20 13:47:38.025762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:38.730 [2024-11-20 13:47:38.025796] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:38.730 [2024-11-20 13:47:38.025805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.730 [2024-11-20 13:47:38.025819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:38.730 [2024-11-20 13:47:38.025826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:38.730 [2024-11-20 13:47:38.025833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:38.730 [2024-11-20 13:47:38.025841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.025847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:38.730 [2024-11-20 13:47:38.025854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:27:38.730 [2024-11-20 13:47:38.025861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.051542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.051586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.730 [2024-11-20 13:47:38.051597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.615 ms 00:27:38.730 [2024-11-20 13:47:38.051605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.051699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.051707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.730 [2024-11-20 13:47:38.051716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:38.730 [2024-11-20 13:47:38.051723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.097414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.097473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.730 [2024-11-20 13:47:38.097486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.628 ms 00:27:38.730 [2024-11-20 13:47:38.097494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.097549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.097559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.730 [2024-11-20 13:47:38.097571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:38.730 [2024-11-20 13:47:38.097578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.097955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.097988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.730 [2024-11-20 13:47:38.097998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:27:38.730 [2024-11-20 13:47:38.098005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.098134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.098143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.730 [2024-11-20 13:47:38.098152] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:38.730 [2024-11-20 13:47:38.098163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.110941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.110990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.730 [2024-11-20 13:47:38.111003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:27:38.730 [2024-11-20 13:47:38.111011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.123615] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:38.730 [2024-11-20 13:47:38.123663] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:38.730 [2024-11-20 13:47:38.123678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.123687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:38.730 [2024-11-20 13:47:38.123698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.565 ms 00:27:38.730 [2024-11-20 13:47:38.123707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.730 [2024-11-20 13:47:38.147748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.730 [2024-11-20 13:47:38.147805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:38.730 [2024-11-20 13:47:38.147818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.982 ms 00:27:38.730 [2024-11-20 13:47:38.147826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.159476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.159529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:39.064 [2024-11-20 13:47:38.159540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.591 ms 00:27:39.064 [2024-11-20 13:47:38.159547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.170895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.170935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:39.064 [2024-11-20 13:47:38.170947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.306 ms 00:27:39.064 [2024-11-20 13:47:38.170954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.171589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.171612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:39.064 [2024-11-20 13:47:38.171621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:27:39.064 [2024-11-20 13:47:38.171632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.227347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.227408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:39.064 [2024-11-20 13:47:38.227428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.697 ms 00:27:39.064 [2024-11-20 13:47:38.227436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.238815] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:39.064 [2024-11-20 13:47:38.241710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.241747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:39.064 [2024-11-20 13:47:38.241760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms 00:27:39.064 [2024-11-20 13:47:38.241769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.241879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.241890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:39.064 [2024-11-20 13:47:38.241899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:39.064 [2024-11-20 13:47:38.241909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.243285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.243318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:39.064 [2024-11-20 13:47:38.243329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:27:39.064 [2024-11-20 13:47:38.243337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.243362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.243371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:39.064 [2024-11-20 13:47:38.243379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:39.064 [2024-11-20 13:47:38.243386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.243422] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:39.064 [2024-11-20 13:47:38.243431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.243439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:39.064 [2024-11-20 13:47:38.243447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:39.064 [2024-11-20 13:47:38.243453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.267696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.267748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:39.064 [2024-11-20 13:47:38.267761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.224 ms 00:27:39.064 [2024-11-20 13:47:38.267774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.064 [2024-11-20 13:47:38.267864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.064 [2024-11-20 13:47:38.267874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:39.064 [2024-11-20 13:47:38.267883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:39.064 [2024-11-20 13:47:38.267890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:39.064 [2024-11-20 13:47:38.268927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.955 ms, result 0 00:27:40.451  [2024-11-20T13:47:40.812Z] Copying: 41/1024 [MB] (41 MBps) [2024-11-20T13:47:41.743Z] Copying: 87/1024 [MB] (46 MBps) [2024-11-20T13:47:42.675Z] Copying: 134/1024 [MB] (46 MBps) [2024-11-20T13:47:43.636Z] Copying: 185/1024 [MB] (51 MBps) [2024-11-20T13:47:44.600Z] Copying: 232/1024 [MB] (47 MBps) [2024-11-20T13:47:45.532Z] Copying: 280/1024 [MB] (47 MBps) [2024-11-20T13:47:46.465Z] Copying: 327/1024 [MB] (47 MBps) [2024-11-20T13:47:47.471Z] Copying: 376/1024 [MB] (48 MBps) [2024-11-20T13:47:48.844Z] Copying: 424/1024 [MB] (48 MBps) [2024-11-20T13:47:49.775Z] Copying: 472/1024 [MB] (47 MBps) [2024-11-20T13:47:50.754Z] Copying: 518/1024 [MB] (46 MBps) [2024-11-20T13:47:51.689Z] Copying: 566/1024 [MB] (48 MBps) [2024-11-20T13:47:52.622Z] Copying: 614/1024 [MB] (47 MBps) [2024-11-20T13:47:53.554Z] Copying: 663/1024 [MB] (48 MBps) [2024-11-20T13:47:54.504Z] Copying: 709/1024 [MB] (46 MBps) [2024-11-20T13:47:55.877Z] Copying: 757/1024 [MB] (48 MBps) [2024-11-20T13:47:56.810Z] Copying: 802/1024 [MB] (44 MBps) [2024-11-20T13:47:57.743Z] Copying: 850/1024 [MB] (47 MBps) [2024-11-20T13:47:58.680Z] Copying: 893/1024 [MB] (42 MBps) [2024-11-20T13:47:59.610Z] Copying: 941/1024 [MB] (48 MBps) [2024-11-20T13:48:00.545Z] Copying: 989/1024 [MB] (48 MBps) [2024-11-20T13:48:00.545Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-20 13:48:00.266880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.266963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:01.118 [2024-11-20 13:48:00.267000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:01.118 [2024-11-20 13:48:00.267015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.267040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.118 [2024-11-20 13:48:00.272632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.272686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:01.118 [2024-11-20 13:48:00.272702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.574 ms 00:28:01.118 [2024-11-20 13:48:00.272714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.273084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.273113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:01.118 [2024-11-20 13:48:00.273127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:28:01.118 [2024-11-20 13:48:00.273139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.279933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.279991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:01.118 [2024-11-20 13:48:00.280005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.766 ms 00:28:01.118 [2024-11-20 13:48:00.280017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.286685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:01.118 [2024-11-20 13:48:00.286716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:01.118 [2024-11-20 13:48:00.286726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.624 ms 00:28:01.118 [2024-11-20 13:48:00.286735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.310389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.310439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:01.118 [2024-11-20 13:48:00.310451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.604 ms 00:28:01.118 [2024-11-20 13:48:00.310460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.324578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.324630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:01.118 [2024-11-20 13:48:00.324643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.071 ms 00:28:01.118 [2024-11-20 13:48:00.324650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.379466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.379537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:01.118 [2024-11-20 13:48:00.379552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.370 ms 00:28:01.118 [2024-11-20 13:48:00.379561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.404316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.404380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:01.118 [2024-11-20 13:48:00.404393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.737 ms 00:28:01.118 [2024-11-20 13:48:00.404401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.427510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.427561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:01.118 [2024-11-20 13:48:00.427584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.987 ms 00:28:01.118 [2024-11-20 13:48:00.427592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.450019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.450066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:01.118 [2024-11-20 13:48:00.450079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.333 ms 00:28:01.118 [2024-11-20 13:48:00.450087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 13:48:00.472891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.118 [2024-11-20 13:48:00.472934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:01.118 [2024-11-20 13:48:00.472946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.738 ms 00:28:01.118 [2024-11-20 13:48:00.472954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.118 [2024-11-20 
13:48:00.473544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:01.118 [2024-11-20 13:48:00.473598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:01.118 [2024-11-20 13:48:00.473612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 
13:48:00.473800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.473992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:28:01.118 [2024-11-20 13:48:00.474000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:01.118 [2024-11-20 13:48:00.474202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:01.119 [2024-11-20 13:48:00.474387] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:01.119 [2024-11-20 13:48:00.474396] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d59d0621-983a-490d-b4c2-bda30131d214 00:28:01.119 [2024-11-20 13:48:00.474405] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:01.119 [2024-11-20 13:48:00.474412] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14272 00:28:01.119 [2024-11-20 13:48:00.474419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13312 00:28:01.119 [2024-11-20 13:48:00.474428] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0721 00:28:01.119 [2024-11-20 13:48:00.474436] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:01.119 [2024-11-20 13:48:00.474452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:01.119 [2024-11-20 13:48:00.474459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:01.119 [2024-11-20 13:48:00.474472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:01.119 [2024-11-20 13:48:00.474483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:01.119 [2024-11-20 13:48:00.474491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.119 [2024-11-20 13:48:00.474498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:01.119 [2024-11-20 13:48:00.474507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:28:01.119 [2024-11-20 13:48:00.474515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.119 [2024-11-20 13:48:00.486799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.119 [2024-11-20 13:48:00.486835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:01.119 [2024-11-20 13:48:00.486845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.242 ms 00:28:01.119 [2024-11-20 13:48:00.486860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.119 [2024-11-20 13:48:00.487231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.119 [2024-11-20 13:48:00.487251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:01.119 [2024-11-20 13:48:00.487260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:28:01.119 [2024-11-20 13:48:00.487267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.119 [2024-11-20 13:48:00.519654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.119 [2024-11-20 13:48:00.519705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.119 [2024-11-20 13:48:00.519716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.119 [2024-11-20 13:48:00.519724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.119 [2024-11-20 13:48:00.519787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.119 [2024-11-20 13:48:00.519795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.119 [2024-11-20 13:48:00.519802] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.119 [2024-11-20 13:48:00.519809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.119 [2024-11-20 13:48:00.519867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.119 [2024-11-20 13:48:00.519882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.119 [2024-11-20 13:48:00.519894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.119 [2024-11-20 13:48:00.519901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.119 [2024-11-20 13:48:00.519915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.119 [2024-11-20 13:48:00.519923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.119 [2024-11-20 13:48:00.519930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.119 [2024-11-20 13:48:00.519937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.597615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.597665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.377 [2024-11-20 13:48:00.597681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.597689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.661403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.661455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:01.377 [2024-11-20 13:48:00.661466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.661475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.661540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.661549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.377 [2024-11-20 13:48:00.661556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.661569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.661602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.661611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.377 [2024-11-20 13:48:00.661618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.661625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.661790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.661806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.377 [2024-11-20 13:48:00.661814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.661821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.661853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.661879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:28:01.377 [2024-11-20 13:48:00.661887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.661894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.661928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.661941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.377 [2024-11-20 13:48:00.661948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.661955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.662018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.377 [2024-11-20 13:48:00.662029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.377 [2024-11-20 13:48:00.662037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.377 [2024-11-20 13:48:00.662044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.377 [2024-11-20 13:48:00.662156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.248 ms, result 0 00:28:01.941 00:28:01.941 00:28:02.200 13:48:01 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:04.100 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77198 00:28:04.100 13:48:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77198 ']' 00:28:04.100 13:48:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77198 00:28:04.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77198) - No such process 00:28:04.100 Process with pid 77198 is not found 00:28:04.100 13:48:03 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77198 is not found' 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:04.100 Remove shared memory files 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:04.100 13:48:03 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:04.100 00:28:04.100 real 2m13.263s 00:28:04.100 user 2m2.590s 00:28:04.100 sys 0m11.817s 00:28:04.100 13:48:03 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.100 13:48:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:04.100 
************************************ 00:28:04.100 END TEST ftl_restore 00:28:04.100 ************************************ 00:28:04.100 13:48:03 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:04.100 13:48:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:04.100 13:48:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.100 13:48:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:04.100 ************************************ 00:28:04.100 START TEST ftl_dirty_shutdown 00:28:04.100 ************************************ 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:04.100 * Looking for test storage... 00:28:04.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.100 --rc genhtml_branch_coverage=1 00:28:04.100 --rc genhtml_function_coverage=1 00:28:04.100 --rc genhtml_legend=1 00:28:04.100 --rc geninfo_all_blocks=1 00:28:04.100 --rc geninfo_unexecuted_blocks=1 00:28:04.100 00:28:04.100 ' 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.100 --rc genhtml_branch_coverage=1 00:28:04.100 --rc genhtml_function_coverage=1 00:28:04.100 --rc genhtml_legend=1 00:28:04.100 --rc geninfo_all_blocks=1 00:28:04.100 --rc geninfo_unexecuted_blocks=1 00:28:04.100 00:28:04.100 ' 00:28:04.100 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.100 --rc genhtml_branch_coverage=1 00:28:04.100 --rc genhtml_function_coverage=1 00:28:04.100 --rc genhtml_legend=1 00:28:04.100 --rc geninfo_all_blocks=1 00:28:04.101 --rc geninfo_unexecuted_blocks=1 00:28:04.101 00:28:04.101 ' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.101 --rc genhtml_branch_coverage=1 00:28:04.101 --rc genhtml_function_coverage=1 00:28:04.101 --rc genhtml_legend=1 00:28:04.101 --rc geninfo_all_blocks=1 00:28:04.101 --rc geninfo_unexecuted_blocks=1 00:28:04.101 00:28:04.101 ' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:04.101 13:48:03 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78620 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78620 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78620 ']' 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.101 13:48:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:04.101 [2024-11-20 13:48:03.383359] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
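For orientation before the RPC traces that follow: dirty_shutdown.sh drives this freshly started spdk_tgt through a short rpc.py sequence that stacks an FTL bdev (ftl0) on a thin-provisioned lvol, with a split of the second NVMe namespace as its non-volatile write cache. A condensed sketch of that sequence as a reading aid, with placeholder UUIDs, not a verbatim excerpt of the script:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base device, 5 GiB
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore on the base bdev
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>        # thin 101 GiB lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV cache device
$rpc bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB cache partition
$rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0
```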
00:28:04.101 [2024-11-20 13:48:03.383524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78620 ] 00:28:04.358 [2024-11-20 13:48:03.546375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.358 [2024-11-20 13:48:03.647559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:04.923 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:05.267 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:05.267 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:05.267 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:05.267 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:05.268 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:05.268 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:05.268 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:05.268 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:05.526 { 00:28:05.526 "name": "nvme0n1", 00:28:05.526 "aliases": [ 00:28:05.526 "af4e509d-c875-4a59-8643-d3e7b31628a2" 00:28:05.526 ], 00:28:05.526 "product_name": "NVMe disk", 00:28:05.526 "block_size": 4096, 00:28:05.526 "num_blocks": 1310720, 00:28:05.526 "uuid": "af4e509d-c875-4a59-8643-d3e7b31628a2", 00:28:05.526 "numa_id": -1, 00:28:05.526 "assigned_rate_limits": { 00:28:05.526 "rw_ios_per_sec": 0, 00:28:05.526 "rw_mbytes_per_sec": 0, 00:28:05.526 "r_mbytes_per_sec": 0, 00:28:05.526 "w_mbytes_per_sec": 0 00:28:05.526 }, 00:28:05.526 "claimed": true, 00:28:05.526 "claim_type": "read_many_write_one", 00:28:05.526 "zoned": false, 00:28:05.526 "supported_io_types": { 00:28:05.526 "read": true, 00:28:05.526 "write": true, 00:28:05.526 "unmap": true, 00:28:05.526 "flush": true, 00:28:05.526 "reset": true, 00:28:05.526 "nvme_admin": true, 00:28:05.526 "nvme_io": true, 00:28:05.526 "nvme_io_md": false, 00:28:05.526 "write_zeroes": true, 00:28:05.526 "zcopy": false, 00:28:05.526 "get_zone_info": false, 00:28:05.526 "zone_management": false, 00:28:05.526 "zone_append": false, 00:28:05.526 "compare": true, 00:28:05.526 "compare_and_write": false, 00:28:05.526 "abort": true, 00:28:05.526 "seek_hole": false, 00:28:05.526 "seek_data": false, 00:28:05.526 
"copy": true, 00:28:05.526 "nvme_iov_md": false 00:28:05.526 }, 00:28:05.526 "driver_specific": { 00:28:05.526 "nvme": [ 00:28:05.526 { 00:28:05.526 "pci_address": "0000:00:11.0", 00:28:05.526 "trid": { 00:28:05.526 "trtype": "PCIe", 00:28:05.526 "traddr": "0000:00:11.0" 00:28:05.526 }, 00:28:05.526 "ctrlr_data": { 00:28:05.526 "cntlid": 0, 00:28:05.526 "vendor_id": "0x1b36", 00:28:05.526 "model_number": "QEMU NVMe Ctrl", 00:28:05.526 "serial_number": "12341", 00:28:05.526 "firmware_revision": "8.0.0", 00:28:05.526 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:05.526 "oacs": { 00:28:05.526 "security": 0, 00:28:05.526 "format": 1, 00:28:05.526 "firmware": 0, 00:28:05.526 "ns_manage": 1 00:28:05.526 }, 00:28:05.526 "multi_ctrlr": false, 00:28:05.526 "ana_reporting": false 00:28:05.526 }, 00:28:05.526 "vs": { 00:28:05.526 "nvme_version": "1.4" 00:28:05.526 }, 00:28:05.526 "ns_data": { 00:28:05.526 "id": 1, 00:28:05.526 "can_share": false 00:28:05.526 } 00:28:05.526 } 00:28:05.526 ], 00:28:05.526 "mp_policy": "active_passive" 00:28:05.526 } 00:28:05.526 } 00:28:05.526 ]' 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:05.526 13:48:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:05.785 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ff93497a-1f17-4896-a075-7aaa70a53096 00:28:05.785 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:05.785 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff93497a-1f17-4896-a075-7aaa70a53096 00:28:06.043 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:06.043 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ef86f9ef-044c-4e6c-830d-935c93e90f8b 00:28:06.043 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ef86f9ef-044c-4e6c-830d-935c93e90f8b 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:06.301 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:06.560 { 00:28:06.560 "name": "4d5c8ff1-809f-4c21-9fc2-e740c76c40e6", 00:28:06.560 "aliases": [ 00:28:06.560 "lvs/nvme0n1p0" 00:28:06.560 ], 00:28:06.560 "product_name": "Logical Volume", 00:28:06.560 "block_size": 4096, 00:28:06.560 "num_blocks": 26476544, 00:28:06.560 "uuid": "4d5c8ff1-809f-4c21-9fc2-e740c76c40e6", 00:28:06.560 "assigned_rate_limits": { 00:28:06.560 "rw_ios_per_sec": 0, 00:28:06.560 "rw_mbytes_per_sec": 0, 00:28:06.560 "r_mbytes_per_sec": 0, 00:28:06.560 "w_mbytes_per_sec": 0 00:28:06.560 }, 00:28:06.560 "claimed": false, 00:28:06.560 "zoned": false, 00:28:06.560 "supported_io_types": { 00:28:06.560 "read": true, 00:28:06.560 "write": true, 00:28:06.560 "unmap": true, 00:28:06.560 "flush": false, 00:28:06.560 "reset": true, 00:28:06.560 "nvme_admin": false, 00:28:06.560 "nvme_io": false, 00:28:06.560 "nvme_io_md": false, 00:28:06.560 "write_zeroes": true, 00:28:06.560 "zcopy": false, 00:28:06.560 "get_zone_info": false, 00:28:06.560 "zone_management": false, 00:28:06.560 "zone_append": false, 00:28:06.560 "compare": false, 00:28:06.560 "compare_and_write": false, 00:28:06.560 "abort": false, 00:28:06.560 "seek_hole": true, 00:28:06.560 "seek_data": true, 00:28:06.560 "copy": false, 00:28:06.560 "nvme_iov_md": false 00:28:06.560 }, 00:28:06.560 "driver_specific": { 00:28:06.560 "lvol": { 00:28:06.560 "lvol_store_uuid": "ef86f9ef-044c-4e6c-830d-935c93e90f8b", 00:28:06.560 "base_bdev": "nvme0n1", 00:28:06.560 "thin_provision": true, 00:28:06.560 "num_allocated_clusters": 0, 00:28:06.560 "snapshot": false, 00:28:06.560 "clone": false, 00:28:06.560 "esnap_clone": false 00:28:06.560 } 00:28:06.560 } 00:28:06.560 } 00:28:06.560 ]' 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:06.560 13:48:05 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:06.818 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:07.076 { 00:28:07.076 "name": "4d5c8ff1-809f-4c21-9fc2-e740c76c40e6", 00:28:07.076 "aliases": [ 00:28:07.076 "lvs/nvme0n1p0" 00:28:07.076 ], 00:28:07.076 "product_name": "Logical Volume", 00:28:07.076 "block_size": 4096, 00:28:07.076 "num_blocks": 26476544, 00:28:07.076 "uuid": "4d5c8ff1-809f-4c21-9fc2-e740c76c40e6", 00:28:07.076 "assigned_rate_limits": { 00:28:07.076 "rw_ios_per_sec": 0, 00:28:07.076 "rw_mbytes_per_sec": 0, 00:28:07.076 "r_mbytes_per_sec": 0, 00:28:07.076 "w_mbytes_per_sec": 0 00:28:07.076 }, 00:28:07.076 "claimed": false, 00:28:07.076 "zoned": false, 00:28:07.076 "supported_io_types": { 00:28:07.076 "read": true, 00:28:07.076 "write": true, 00:28:07.076 "unmap": true, 00:28:07.076 "flush": false, 00:28:07.076 "reset": true, 00:28:07.076 "nvme_admin": false, 00:28:07.076 "nvme_io": false, 00:28:07.076 "nvme_io_md": false, 00:28:07.076 "write_zeroes": true, 00:28:07.076 "zcopy": false, 00:28:07.076 "get_zone_info": false, 00:28:07.076 "zone_management": false, 00:28:07.076 "zone_append": false, 00:28:07.076 "compare": false, 00:28:07.076 "compare_and_write": false, 00:28:07.076 "abort": false, 00:28:07.076 "seek_hole": true, 00:28:07.076 "seek_data": true, 00:28:07.076 "copy": false, 00:28:07.076 "nvme_iov_md": false 00:28:07.076 }, 00:28:07.076 "driver_specific": { 00:28:07.076 "lvol": { 00:28:07.076 "lvol_store_uuid": "ef86f9ef-044c-4e6c-830d-935c93e90f8b", 00:28:07.076 "base_bdev": "nvme0n1", 00:28:07.076 "thin_provision": true, 00:28:07.076 "num_allocated_clusters": 0, 00:28:07.076 "snapshot": false, 00:28:07.076 "clone": false, 00:28:07.076 "esnap_clone": false 00:28:07.076 } 00:28:07.076 } 00:28:07.076 } 00:28:07.076 ]' 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:07.076 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:07.335 { 00:28:07.335 "name": "4d5c8ff1-809f-4c21-9fc2-e740c76c40e6", 00:28:07.335 "aliases": [ 00:28:07.335 "lvs/nvme0n1p0" 00:28:07.335 ], 00:28:07.335 "product_name": "Logical Volume", 00:28:07.335 "block_size": 4096, 00:28:07.335 "num_blocks": 26476544, 00:28:07.335 "uuid": "4d5c8ff1-809f-4c21-9fc2-e740c76c40e6", 00:28:07.335 "assigned_rate_limits": { 00:28:07.335 "rw_ios_per_sec": 0, 00:28:07.335 "rw_mbytes_per_sec": 0, 00:28:07.335 "r_mbytes_per_sec": 0, 00:28:07.335 "w_mbytes_per_sec": 0 00:28:07.335 }, 00:28:07.335 "claimed": false, 00:28:07.335 "zoned": false, 00:28:07.335 "supported_io_types": { 00:28:07.335 "read": true, 00:28:07.335 "write": true, 00:28:07.335 "unmap": true, 00:28:07.335 "flush": false, 00:28:07.335 "reset": true, 00:28:07.335 "nvme_admin": false, 00:28:07.335 "nvme_io": false, 00:28:07.335 "nvme_io_md": false, 00:28:07.335 "write_zeroes": true, 00:28:07.335 "zcopy": false, 00:28:07.335 "get_zone_info": false, 00:28:07.335 "zone_management": false, 00:28:07.335 "zone_append": false, 00:28:07.335 "compare": false, 00:28:07.335 "compare_and_write": false, 00:28:07.335 "abort": false, 00:28:07.335 "seek_hole": true, 00:28:07.335 "seek_data": true, 00:28:07.335 "copy": false, 00:28:07.335 "nvme_iov_md": false 00:28:07.335 }, 00:28:07.335 "driver_specific": { 00:28:07.335 "lvol": { 00:28:07.335 "lvol_store_uuid": "ef86f9ef-044c-4e6c-830d-935c93e90f8b", 00:28:07.335 "base_bdev": "nvme0n1", 00:28:07.335 "thin_provision": true, 00:28:07.335 "num_allocated_clusters": 0, 00:28:07.335 "snapshot": false, 00:28:07.335 "clone": false, 00:28:07.335 "esnap_clone": false 00:28:07.335 } 00:28:07.335 } 00:28:07.335 } 00:28:07.335 ]' 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:07.335 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 
--l2p_dram_limit 10' 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:07.595 13:48:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4d5c8ff1-809f-4c21-9fc2-e740c76c40e6 --l2p_dram_limit 10 -c nvc0n1p0 00:28:07.595 [2024-11-20 13:48:06.926845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.926900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:07.595 [2024-11-20 13:48:06.926913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:07.595 [2024-11-20 13:48:06.926920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.926998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.927007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.595 [2024-11-20 13:48:06.927016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:07.595 [2024-11-20 13:48:06.927022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.927041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:07.595 [2024-11-20 13:48:06.927643] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:07.595 [2024-11-20 13:48:06.927669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.927676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:07.595 [2024-11-20 13:48:06.927684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:28:07.595 [2024-11-20 13:48:06.927691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.927813] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fdfa50ba-faa9-4dc9-a2f4-156b3eb229fe 00:28:07.595 [2024-11-20 13:48:06.928903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.928934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:07.595 [2024-11-20 13:48:06.928942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:07.595 [2024-11-20 13:48:06.928950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.934225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.934263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.595 [2024-11-20 13:48:06.934271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.230 ms 00:28:07.595 [2024-11-20 13:48:06.934279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.934356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.934365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.595 [2024-11-20 13:48:06.934372] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:07.595 [2024-11-20 13:48:06.934382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.934436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.934450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:07.595 [2024-11-20 13:48:06.934457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:07.595 [2024-11-20 13:48:06.934467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.934486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:07.595 [2024-11-20 13:48:06.937555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.937583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.595 [2024-11-20 13:48:06.937592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:28:07.595 [2024-11-20 13:48:06.937599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.937629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.937636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:07.595 [2024-11-20 13:48:06.937644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:07.595 [2024-11-20 13:48:06.937649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.595 [2024-11-20 13:48:06.937665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:07.595 [2024-11-20 13:48:06.937774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:07.595 [2024-11-20 13:48:06.937791] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:07.595 [2024-11-20 13:48:06.937800] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:07.595 [2024-11-20 13:48:06.937810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:07.595 [2024-11-20 13:48:06.937817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:07.595 [2024-11-20 13:48:06.937825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:07.595 [2024-11-20 13:48:06.937831] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:07.595 [2024-11-20 13:48:06.937840] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:07.595 [2024-11-20 13:48:06.937845] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:07.595 [2024-11-20 13:48:06.937853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.595 [2024-11-20 13:48:06.937859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:07.596 [2024-11-20 13:48:06.937868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:28:07.596 [2024-11-20 13:48:06.937879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.596 [2024-11-20 13:48:06.937948] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.596 [2024-11-20 13:48:06.937955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:07.596 [2024-11-20 13:48:06.937963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:07.596 [2024-11-20 13:48:06.937984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.596 [2024-11-20 13:48:06.938070] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:07.596 [2024-11-20 13:48:06.938082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:07.596 [2024-11-20 13:48:06.938090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:07.596 [2024-11-20 13:48:06.938110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:07.596 [2024-11-20 13:48:06.938129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.596 [2024-11-20 13:48:06.938142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:07.596 [2024-11-20 13:48:06.938147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:07.596 [2024-11-20 13:48:06.938155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.596 [2024-11-20 13:48:06.938161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:07.596 [2024-11-20 13:48:06.938168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:07.596 [2024-11-20 13:48:06.938174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:07.596 [2024-11-20 13:48:06.938187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:07.596 [2024-11-20 13:48:06.938207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:07.596 [2024-11-20 13:48:06.938225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:07.596 [2024-11-20 13:48:06.938243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938255] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:07.596 [2024-11-20 13:48:06.938260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:07.596 [2024-11-20 13:48:06.938280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.596 [2024-11-20 13:48:06.938292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:07.596 [2024-11-20 13:48:06.938297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:07.596 [2024-11-20 13:48:06.938304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.596 [2024-11-20 13:48:06.938309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:07.596 [2024-11-20 13:48:06.938315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:07.596 [2024-11-20 13:48:06.938321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:07.596 [2024-11-20 13:48:06.938332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:07.596 [2024-11-20 13:48:06.938339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938344] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:07.596 [2024-11-20 13:48:06.938352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:07.596 [2024-11-20 13:48:06.938358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.596 [2024-11-20 13:48:06.938372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:07.596 [2024-11-20 13:48:06.938380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:07.596 [2024-11-20 13:48:06.938386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:07.596 [2024-11-20 13:48:06.938393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:07.596 [2024-11-20 13:48:06.938398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:07.596 [2024-11-20 13:48:06.938405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:07.596 [2024-11-20 13:48:06.938413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:07.596 [2024-11-20 13:48:06.938422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.596 [2024-11-20 13:48:06.938430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:07.596 [2024-11-20 13:48:06.938438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:07.596 [2024-11-20 13:48:06.938443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:07.596 [2024-11-20 13:48:06.938450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:07.596 [2024-11-20 13:48:06.938456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:07.596 [2024-11-20 13:48:06.938463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:07.596 [2024-11-20 13:48:06.938468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:07.596 [2024-11-20 13:48:06.938475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:07.597 [2024-11-20 13:48:06.938481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:07.597 [2024-11-20 13:48:06.938489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:07.597 [2024-11-20 13:48:06.938494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:07.597 [2024-11-20 13:48:06.938502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:07.597 [2024-11-20 13:48:06.938507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:07.597 [2024-11-20 13:48:06.938516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:07.597 [2024-11-20 13:48:06.938522] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:07.597 [2024-11-20 13:48:06.938529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.597 [2024-11-20 13:48:06.938536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.597 [2024-11-20 13:48:06.938542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:07.597 [2024-11-20 13:48:06.938548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:07.597 [2024-11-20 13:48:06.938555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:07.597 [2024-11-20 13:48:06.938561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.597 [2024-11-20 13:48:06.938568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:07.597 [2024-11-20 13:48:06.938575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:28:07.597 [2024-11-20 13:48:06.938583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.597 [2024-11-20 13:48:06.938633] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:07.597 [2024-11-20 13:48:06.938649] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:10.143 [2024-11-20 13:48:09.072763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.072828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:10.143 [2024-11-20 13:48:09.072845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2134.120 ms 00:28:10.143 [2024-11-20 13:48:09.072871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.097706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.097760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:10.143 [2024-11-20 13:48:09.097774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.620 ms 00:28:10.143 [2024-11-20 13:48:09.097784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.097904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.097916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:10.143 [2024-11-20 13:48:09.097924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:10.143 [2024-11-20 13:48:09.097938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.128035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.128084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:10.143 [2024-11-20 13:48:09.128096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.045 ms 00:28:10.143 [2024-11-20 13:48:09.128105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.128141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.128154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.143 [2024-11-20 13:48:09.128162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:10.143 [2024-11-20 13:48:09.128171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.128535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.128561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.143 [2024-11-20 13:48:09.128570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:28:10.143 [2024-11-20 13:48:09.128580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.128694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.128709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.143 [2024-11-20 13:48:09.128719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:10.143 [2024-11-20 13:48:09.128731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.142444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.142483] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.143 [2024-11-20 13:48:09.142493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.695 ms 00:28:10.143 [2024-11-20 13:48:09.142502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.166614] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:10.143 [2024-11-20 13:48:09.170055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.170089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:10.143 [2024-11-20 13:48:09.170104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.472 ms 00:28:10.143 [2024-11-20 13:48:09.170112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.222943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.223005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:10.143 [2024-11-20 13:48:09.223021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.781 ms 00:28:10.143 [2024-11-20 13:48:09.223029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.223216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.223230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:10.143 [2024-11-20 13:48:09.223244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:28:10.143 [2024-11-20 13:48:09.223251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.246811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.246872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:10.143 [2024-11-20 13:48:09.246887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.507 ms 00:28:10.143 [2024-11-20 13:48:09.246895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.269704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.269748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:10.143 [2024-11-20 13:48:09.269762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.761 ms 00:28:10.143 [2024-11-20 13:48:09.269770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.270342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.270365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:10.143 [2024-11-20 13:48:09.270376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:28:10.143 [2024-11-20 13:48:09.270385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.143 [2024-11-20 13:48:09.336663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.143 [2024-11-20 13:48:09.336717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:10.143 [2024-11-20 13:48:09.336735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.236 ms 00:28:10.143 [2024-11-20 13:48:09.336743] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.144 [2024-11-20 13:48:09.360991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.144 [2024-11-20 13:48:09.361038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:10.144 [2024-11-20 13:48:09.361052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.161 ms 00:28:10.144 [2024-11-20 13:48:09.361060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.144 [2024-11-20 13:48:09.384804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.144 [2024-11-20 13:48:09.384866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:10.144 [2024-11-20 13:48:09.384879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.695 ms 00:28:10.144 [2024-11-20 13:48:09.384887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.144 [2024-11-20 13:48:09.408143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.144 [2024-11-20 13:48:09.408190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:10.144 [2024-11-20 13:48:09.408204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.210 ms 00:28:10.144 [2024-11-20 13:48:09.408211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.144 [2024-11-20 13:48:09.408256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.144 [2024-11-20 13:48:09.408265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:10.144 [2024-11-20 13:48:09.408277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:10.144 [2024-11-20 13:48:09.408285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.144 [2024-11-20 13:48:09.408369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.144 [2024-11-20 13:48:09.408379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:10.144 [2024-11-20 13:48:09.408391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:10.144 [2024-11-20 13:48:09.408398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.144 [2024-11-20 13:48:09.409311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2482.021 ms, result 0 00:28:10.144 { 00:28:10.144 "name": "ftl0", 00:28:10.144 "uuid": "fdfa50ba-faa9-4dc9-a2f4-156b3eb229fe" 00:28:10.144 } 00:28:10.144 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:10.144 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:10.401 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:10.401 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:10.401 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:10.658 /dev/nbd0 00:28:10.658 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:10.659 1+0 records in 00:28:10.659 1+0 records out 00:28:10.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210093 s, 19.5 MB/s 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:10.659 13:48:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:10.659 [2024-11-20 13:48:09.945398] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:28:10.659 [2024-11-20 13:48:09.945515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78740 ] 00:28:10.916 [2024-11-20 13:48:10.103858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.916 [2024-11-20 13:48:10.209209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.289  [2024-11-20T13:48:12.648Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-20T13:48:13.581Z] Copying: 385/1024 [MB] (193 MBps) [2024-11-20T13:48:14.514Z] Copying: 592/1024 [MB] (207 MBps) [2024-11-20T13:48:15.449Z] Copying: 839/1024 [MB] (247 MBps) [2024-11-20T13:48:16.015Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:28:16.588 00:28:16.588 13:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:19.114 13:48:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:19.114 [2024-11-20 13:48:18.001788] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
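For context: the repeated get_bdev_size calls in the trace above reduce the bdev_get_bdevs JSON to a size in MiB, i.e. block_size * num_blocks / 2^20 = 4096 * 26476544 / 1048576 = 103424 MiB for the base lvol, with a 5171 MiB write-buffer cache split from nvc0n1; likewise --l2p_dram_limit 10 caps the resident L2P at 10 MiB (hence the "l2p maximum resident size is: 9 (of 10) MiB" notice), while the full L2P of 20971520 entries x 4 bytes is the 80 MiB region in the layout dump. The data-path portion of the test can be reproduced by hand against a running target; a minimal sketch, assuming rpc.py talks to the default RPC socket and the FTL bdev ftl0 from the startup trace exists (relative paths here are illustrative, the test uses absolute ones):

  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0    # expose ftl0 as a kernel block device
  # stage 1 GiB (262144 x 4 KiB blocks) of random data and record its checksum
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
  md5sum testfile
  # write the staged data through the FTL device with O_DIRECT
  build/bin/spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The checksum is recorded so the same data can be verified again after the dirty shutdown.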
00:28:19.114 [2024-11-20 13:48:18.001914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78827 ] 00:28:19.115 [2024-11-20 13:48:18.158022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.115 [2024-11-20 13:48:18.240471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.048  [2024-11-20T13:48:20.851Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-20T13:48:21.783Z] Copying: 49/1024 [MB] (25 MBps) [2024-11-20T13:48:22.751Z] Copying: 74/1024 [MB] (24 MBps) [2024-11-20T13:48:23.683Z] Copying: 97/1024 [MB] (23 MBps) [2024-11-20T13:48:24.615Z] Copying: 124/1024 [MB] (27 MBps) [2024-11-20T13:48:25.549Z] Copying: 153/1024 [MB] (28 MBps) [2024-11-20T13:48:26.482Z] Copying: 174/1024 [MB] (20 MBps) [2024-11-20T13:48:27.856Z] Copying: 199/1024 [MB] (25 MBps) [2024-11-20T13:48:28.422Z] Copying: 220/1024 [MB] (20 MBps) [2024-11-20T13:48:29.856Z] Copying: 250/1024 [MB] (30 MBps) [2024-11-20T13:48:30.422Z] Copying: 284/1024 [MB] (33 MBps) [2024-11-20T13:48:31.795Z] Copying: 313/1024 [MB] (28 MBps) [2024-11-20T13:48:32.767Z] Copying: 339/1024 [MB] (25 MBps) [2024-11-20T13:48:33.700Z] Copying: 368/1024 [MB] (29 MBps) [2024-11-20T13:48:34.632Z] Copying: 395/1024 [MB] (27 MBps) [2024-11-20T13:48:35.565Z] Copying: 424/1024 [MB] (29 MBps) [2024-11-20T13:48:36.498Z] Copying: 454/1024 [MB] (29 MBps) [2024-11-20T13:48:37.432Z] Copying: 482/1024 [MB] (28 MBps) [2024-11-20T13:48:38.803Z] Copying: 511/1024 [MB] (29 MBps) [2024-11-20T13:48:39.753Z] Copying: 540/1024 [MB] (28 MBps) [2024-11-20T13:48:40.683Z] Copying: 569/1024 [MB] (29 MBps) [2024-11-20T13:48:41.682Z] Copying: 597/1024 [MB] (27 MBps) [2024-11-20T13:48:42.616Z] Copying: 626/1024 [MB] (29 MBps) [2024-11-20T13:48:43.549Z] Copying: 656/1024 [MB] (29 MBps) [2024-11-20T13:48:44.556Z] Copying: 685/1024 [MB] (28 MBps) [2024-11-20T13:48:45.491Z] Copying: 714/1024 [MB] (29 MBps) [2024-11-20T13:48:46.425Z] Copying: 740/1024 [MB] (25 MBps) [2024-11-20T13:48:47.795Z] Copying: 767/1024 [MB] (27 MBps) [2024-11-20T13:48:48.728Z] Copying: 796/1024 [MB] (28 MBps) [2024-11-20T13:48:49.661Z] Copying: 825/1024 [MB] (29 MBps) [2024-11-20T13:48:50.593Z] Copying: 858/1024 [MB] (32 MBps) [2024-11-20T13:48:51.526Z] Copying: 888/1024 [MB] (30 MBps) [2024-11-20T13:48:52.459Z] Copying: 918/1024 [MB] (29 MBps) [2024-11-20T13:48:53.833Z] Copying: 947/1024 [MB] (28 MBps) [2024-11-20T13:48:54.766Z] Copying: 971/1024 [MB] (24 MBps) [2024-11-20T13:48:55.332Z] Copying: 1002/1024 [MB] (31 MBps) [2024-11-20T13:48:55.897Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:28:56.470 00:28:56.470 13:48:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:56.470 13:48:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:56.727 13:48:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:56.986 [2024-11-20 13:48:56.162922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.162995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.986 [2024-11-20 13:48:56.163011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:56.986 [2024-11-20 13:48:56.163021] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.163047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.986 [2024-11-20 13:48:56.165696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.165729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.986 [2024-11-20 13:48:56.165742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms 00:28:56.986 [2024-11-20 13:48:56.165751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.167653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.167688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.986 [2024-11-20 13:48:56.167700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:28:56.986 [2024-11-20 13:48:56.167708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.182997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.183052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:56.986 [2024-11-20 13:48:56.183072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.262 ms 00:28:56.986 [2024-11-20 13:48:56.183080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.189231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.189265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:56.986 [2024-11-20 13:48:56.189277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:28:56.986 [2024-11-20 13:48:56.189284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.212946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.212995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:56.986 [2024-11-20 13:48:56.213009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.575 ms 00:28:56.986 [2024-11-20 13:48:56.213016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.227705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.227748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:56.986 [2024-11-20 13:48:56.227762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.646 ms 00:28:56.986 [2024-11-20 13:48:56.227773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.227928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.227939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:56.986 [2024-11-20 13:48:56.227949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:28:56.986 [2024-11-20 13:48:56.227956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.251406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.251453] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:56.986 [2024-11-20 13:48:56.251466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.419 ms 00:28:56.986 [2024-11-20 13:48:56.251475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.274596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.274641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:56.986 [2024-11-20 13:48:56.274654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.074 ms 00:28:56.986 [2024-11-20 13:48:56.274662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.297352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.297399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:56.986 [2024-11-20 13:48:56.297412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.638 ms 00:28:56.986 [2024-11-20 13:48:56.297419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.320553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.986 [2024-11-20 13:48:56.320601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:56.986 [2024-11-20 13:48:56.320615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.031 ms 00:28:56.986 [2024-11-20 13:48:56.320622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.986 [2024-11-20 13:48:56.320671] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:56.986 [2024-11-20 13:48:56.320686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.986 [2024-11-20 13:48:56.320799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.320994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321020] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 
13:48:56.321231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:28:56.987 [2024-11-20 13:48:56.321450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.987 [2024-11-20 13:48:56.321457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.988 [2024-11-20 13:48:56.321567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.988 [2024-11-20 13:48:56.321576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfa50ba-faa9-4dc9-a2f4-156b3eb229fe 00:28:56.988 [2024-11-20 13:48:56.321584] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:56.988 [2024-11-20 13:48:56.321594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:56.988 [2024-11-20 13:48:56.321601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:56.988 [2024-11-20 13:48:56.321613] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:56.988 [2024-11-20 13:48:56.321619] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:56.988 [2024-11-20 13:48:56.321629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.988 [2024-11-20 13:48:56.321636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.988 [2024-11-20 13:48:56.321644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.988 [2024-11-20 13:48:56.321650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.988 [2024-11-20 13:48:56.321659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.988 [2024-11-20 13:48:56.321666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.988 [2024-11-20 13:48:56.321676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:28:56.988 [2024-11-20 13:48:56.321684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:56.988 [2024-11-20 13:48:56.334376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.988 [2024-11-20 13:48:56.334421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.988 [2024-11-20 13:48:56.334435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:28:56.988 [2024-11-20 13:48:56.334443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.988 [2024-11-20 13:48:56.334805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.988 [2024-11-20 13:48:56.334821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.988 [2024-11-20 13:48:56.334831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:28:56.988 [2024-11-20 13:48:56.334838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.988 [2024-11-20 13:48:56.376562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.988 [2024-11-20 13:48:56.376617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.988 [2024-11-20 13:48:56.376630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.988 [2024-11-20 13:48:56.376638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.988 [2024-11-20 13:48:56.376708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.988 [2024-11-20 13:48:56.376716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.988 [2024-11-20 13:48:56.376726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.988 [2024-11-20 13:48:56.376733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.988 [2024-11-20 13:48:56.376832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.988 [2024-11-20 13:48:56.376844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.988 [2024-11-20 13:48:56.376854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.988 [2024-11-20 13:48:56.376861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.988 [2024-11-20 13:48:56.376890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.988 [2024-11-20 13:48:56.376899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.988 [2024-11-20 13:48:56.376908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.988 [2024-11-20 13:48:56.376914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.246 [2024-11-20 13:48:56.454840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.246 [2024-11-20 13:48:56.454891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:57.246 [2024-11-20 13:48:56.454905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.246 [2024-11-20 13:48:56.454913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.246 [2024-11-20 13:48:56.519045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:57.247 [2024-11-20 13:48:56.519105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 
[2024-11-20 13:48:56.519113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:57.247 [2024-11-20 13:48:56.519206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 [2024-11-20 13:48:56.519216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:57.247 [2024-11-20 13:48:56.519296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 [2024-11-20 13:48:56.519303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:57.247 [2024-11-20 13:48:56.519417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 [2024-11-20 13:48:56.519425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:57.247 [2024-11-20 13:48:56.519474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 [2024-11-20 13:48:56.519480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:57.247 [2024-11-20 13:48:56.519535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 [2024-11-20 13:48:56.519542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.247 [2024-11-20 13:48:56.519596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:57.247 [2024-11-20 13:48:56.519605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.247 [2024-11-20 13:48:56.519612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.247 [2024-11-20 13:48:56.519738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.783 ms, result 0 00:28:57.247 true 00:28:57.247 13:48:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78620 00:28:57.247 13:48:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78620 00:28:57.247 13:48:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:57.247 [2024-11-20 13:48:56.626389] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
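The trace above has just finished an 'FTL shutdown' management pass, after which dirty_shutdown.sh kills the target outright and stages the data for the post-recovery write. A minimal sketch of that step, using the PID and paths exactly as logged in this run (the script itself may wrap these differently):

  kill -9 78620                                    # @83: SIGKILL the spdk_tgt process
  rm -f /dev/shm/spdk_tgt_trace.pid78620           # @84: drop the stale trace file
  # @87: stage 262144 x 4096 B = 1 GiB of random data, matching the
  # "Copying: 1024/1024 [MB]" progress that follows
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
      --bs=4096 --count=262144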
00:28:57.247 [2024-11-20 13:48:56.626558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79234 ] 00:28:57.505 [2024-11-20 13:48:56.802591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.505 [2024-11-20 13:48:56.904388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.875  [2024-11-20T13:48:59.234Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-20T13:49:00.234Z] Copying: 402/1024 [MB] (208 MBps) [2024-11-20T13:49:01.203Z] Copying: 651/1024 [MB] (248 MBps) [2024-11-20T13:49:01.769Z] Copying: 897/1024 [MB] (245 MBps) [2024-11-20T13:49:02.335Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:29:02.908 00:29:02.908 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78620 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:02.908 13:49:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.908 [2024-11-20 13:49:02.324685] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:29:02.908 [2024-11-20 13:49:02.324788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79298 ] 00:29:03.166 [2024-11-20 13:49:02.480336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.166 [2024-11-20 13:49:02.578349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.426 [2024-11-20 13:49:02.836748] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.426 [2024-11-20 13:49:02.836808] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.702 [2024-11-20 13:49:02.900653] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:03.702 [2024-11-20 13:49:02.900843] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:03.702 [2024-11-20 13:49:02.901253] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:03.702 [2024-11-20 13:49:03.081559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.702 [2024-11-20 13:49:03.081614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:03.702 [2024-11-20 13:49:03.081627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.702 [2024-11-20 13:49:03.081635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.702 [2024-11-20 13:49:03.081683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.702 [2024-11-20 13:49:03.081694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.702 [2024-11-20 13:49:03.081702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:03.702 [2024-11-20 13:49:03.081708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.702 [2024-11-20 13:49:03.081727] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:03.702 
[2024-11-20 13:49:03.082394] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:03.702 [2024-11-20 13:49:03.082416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.702 [2024-11-20 13:49:03.082423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.702 [2024-11-20 13:49:03.082432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:29:03.702 [2024-11-20 13:49:03.082439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.702 [2024-11-20 13:49:03.083731] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:03.702 [2024-11-20 13:49:03.095740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.702 [2024-11-20 13:49:03.095777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:03.702 [2024-11-20 13:49:03.095789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.010 ms 00:29:03.702 [2024-11-20 13:49:03.095797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.095855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.095865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:03.703 [2024-11-20 13:49:03.095874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:03.703 [2024-11-20 13:49:03.095881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.100630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.100661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.703 [2024-11-20 13:49:03.100670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:29:03.703 [2024-11-20 13:49:03.100677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.100747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.100756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.703 [2024-11-20 13:49:03.100764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:03.703 [2024-11-20 13:49:03.100771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.100812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.100822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:03.703 [2024-11-20 13:49:03.100830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:03.703 [2024-11-20 13:49:03.100837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.100858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:03.703 [2024-11-20 13:49:03.103999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.104027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.703 [2024-11-20 13:49:03.104037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:29:03.703 [2024-11-20 13:49:03.104044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:03.703 [2024-11-20 13:49:03.104071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.104079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:03.703 [2024-11-20 13:49:03.104087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:03.703 [2024-11-20 13:49:03.104094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.104116] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:03.703 [2024-11-20 13:49:03.104133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:03.703 [2024-11-20 13:49:03.104166] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:03.703 [2024-11-20 13:49:03.104180] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:03.703 [2024-11-20 13:49:03.104282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:03.703 [2024-11-20 13:49:03.104297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:03.703 [2024-11-20 13:49:03.104308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:03.703 [2024-11-20 13:49:03.104318] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104330] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104338] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:03.703 [2024-11-20 13:49:03.104345] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:03.703 [2024-11-20 13:49:03.104352] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:03.703 [2024-11-20 13:49:03.104359] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:03.703 [2024-11-20 13:49:03.104367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.104374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:03.703 [2024-11-20 13:49:03.104381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:29:03.703 [2024-11-20 13:49:03.104388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.104469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.703 [2024-11-20 13:49:03.104480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:03.703 [2024-11-20 13:49:03.104487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:03.703 [2024-11-20 13:49:03.104494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.703 [2024-11-20 13:49:03.104593] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:03.703 [2024-11-20 13:49:03.104610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:03.703 [2024-11-20 13:49:03.104618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104626] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:03.703 [2024-11-20 13:49:03.104640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:03.703 [2024-11-20 13:49:03.104661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.703 [2024-11-20 13:49:03.104673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:03.703 [2024-11-20 13:49:03.104686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:03.703 [2024-11-20 13:49:03.104692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.703 [2024-11-20 13:49:03.104699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:03.703 [2024-11-20 13:49:03.104705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:03.703 [2024-11-20 13:49:03.104713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:03.703 [2024-11-20 13:49:03.104727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:03.703 [2024-11-20 13:49:03.104747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:03.703 [2024-11-20 13:49:03.104766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:03.703 [2024-11-20 13:49:03.104784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:03.703 [2024-11-20 13:49:03.104803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.703 [2024-11-20 13:49:03.104815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:03.703 [2024-11-20 13:49:03.104822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.703 [2024-11-20 13:49:03.104834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:03.703 
[2024-11-20 13:49:03.104840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:03.703 [2024-11-20 13:49:03.104846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.703 [2024-11-20 13:49:03.104853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:03.703 [2024-11-20 13:49:03.104859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:03.703 [2024-11-20 13:49:03.104865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:03.703 [2024-11-20 13:49:03.104885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:03.703 [2024-11-20 13:49:03.104891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.703 [2024-11-20 13:49:03.104898] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:03.703 [2024-11-20 13:49:03.104906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:03.704 [2024-11-20 13:49:03.104914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.704 [2024-11-20 13:49:03.104923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.704 [2024-11-20 13:49:03.104931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:03.704 [2024-11-20 13:49:03.104938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:03.704 [2024-11-20 13:49:03.104945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:03.704 [2024-11-20 13:49:03.104951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:03.704 [2024-11-20 13:49:03.104958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:03.704 [2024-11-20 13:49:03.104964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:03.704 [2024-11-20 13:49:03.104983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:03.704 [2024-11-20 13:49:03.104992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:03.704 [2024-11-20 13:49:03.105007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:03.704 [2024-11-20 13:49:03.105015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:03.704 [2024-11-20 13:49:03.105022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:03.704 [2024-11-20 13:49:03.105029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:03.704 [2024-11-20 13:49:03.105036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:03.704 [2024-11-20 13:49:03.105043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:29:03.704 [2024-11-20 13:49:03.105050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:03.704 [2024-11-20 13:49:03.105057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:03.704 [2024-11-20 13:49:03.105064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:03.704 [2024-11-20 13:49:03.105098] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:03.704 [2024-11-20 13:49:03.105106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:03.704 [2024-11-20 13:49:03.105121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:03.704 [2024-11-20 13:49:03.105128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:03.704 [2024-11-20 13:49:03.105135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:03.704 [2024-11-20 13:49:03.105142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.704 [2024-11-20 13:49:03.105149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:03.704 [2024-11-20 13:49:03.105156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:29:03.704 [2024-11-20 13:49:03.105163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.130875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.130913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.962 [2024-11-20 13:49:03.130923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.656 ms 00:29:03.962 [2024-11-20 13:49:03.130931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.131038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.131051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.962 [2024-11-20 13:49:03.131059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:03.962 [2024-11-20 
13:49:03.131067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.179060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.179105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.962 [2024-11-20 13:49:03.179122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.935 ms 00:29:03.962 [2024-11-20 13:49:03.179130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.179185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.179195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.962 [2024-11-20 13:49:03.179205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:03.962 [2024-11-20 13:49:03.179212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.179576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.179593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.962 [2024-11-20 13:49:03.179602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:29:03.962 [2024-11-20 13:49:03.179609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.179741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.179750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.962 [2024-11-20 13:49:03.179758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:03.962 [2024-11-20 13:49:03.179765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.192804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.192941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.962 [2024-11-20 13:49:03.193077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.020 ms 00:29:03.962 [2024-11-20 13:49:03.193101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.205566] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:03.962 [2024-11-20 13:49:03.205704] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.962 [2024-11-20 13:49:03.205767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.205881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.962 [2024-11-20 13:49:03.205912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.541 ms 00:29:03.962 [2024-11-20 13:49:03.206284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.230604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.230720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.962 [2024-11-20 13:49:03.230789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.203 ms 00:29:03.962 [2024-11-20 13:49:03.231227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:03.962 [2024-11-20 13:49:03.243025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.243133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.962 [2024-11-20 13:49:03.243187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.697 ms 00:29:03.962 [2024-11-20 13:49:03.243210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.254429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.962 [2024-11-20 13:49:03.254534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.962 [2024-11-20 13:49:03.254582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.157 ms 00:29:03.962 [2024-11-20 13:49:03.254604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.962 [2024-11-20 13:49:03.255322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.255420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.963 [2024-11-20 13:49:03.255476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:29:03.963 [2024-11-20 13:49:03.255497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.309723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.309867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.963 [2024-11-20 13:49:03.309938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.194 ms 00:29:03.963 [2024-11-20 13:49:03.309962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.320698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.963 [2024-11-20 13:49:03.323414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.323510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.963 [2024-11-20 13:49:03.323557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.382 ms 00:29:03.963 [2024-11-20 13:49:03.323579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.323704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.323832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.963 [2024-11-20 13:49:03.323879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:03.963 [2024-11-20 13:49:03.323900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.323998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.324026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.963 [2024-11-20 13:49:03.324091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:03.963 [2024-11-20 13:49:03.324113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.324147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.324206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:29:03.963 [2024-11-20 13:49:03.324229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.963 [2024-11-20 13:49:03.324248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.324290] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.963 [2024-11-20 13:49:03.324315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.324333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.963 [2024-11-20 13:49:03.324357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:03.963 [2024-11-20 13:49:03.324375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.347449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.347573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.963 [2024-11-20 13:49:03.347626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.040 ms 00:29:03.963 [2024-11-20 13:49:03.347665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.347769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.963 [2024-11-20 13:49:03.347816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.963 [2024-11-20 13:49:03.347880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:03.963 [2024-11-20 13:49:03.347902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.963 [2024-11-20 13:49:03.348871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.909 ms, result 0 00:29:05.380  [2024-11-20T13:49:05.390Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-20T13:49:06.761Z] Copying: 92/1024 [MB] (46 MBps) [2024-11-20T13:49:07.702Z] Copying: 137/1024 [MB] (45 MBps) [2024-11-20T13:49:08.637Z] Copying: 183/1024 [MB] (46 MBps) [2024-11-20T13:49:09.591Z] Copying: 225/1024 [MB] (41 MBps) [2024-11-20T13:49:10.525Z] Copying: 271/1024 [MB] (46 MBps) [2024-11-20T13:49:11.455Z] Copying: 319/1024 [MB] (47 MBps) [2024-11-20T13:49:12.388Z] Copying: 363/1024 [MB] (44 MBps) [2024-11-20T13:49:13.761Z] Copying: 409/1024 [MB] (45 MBps) [2024-11-20T13:49:14.693Z] Copying: 454/1024 [MB] (44 MBps) [2024-11-20T13:49:15.659Z] Copying: 497/1024 [MB] (43 MBps) [2024-11-20T13:49:16.593Z] Copying: 542/1024 [MB] (44 MBps) [2024-11-20T13:49:17.527Z] Copying: 587/1024 [MB] (45 MBps) [2024-11-20T13:49:18.460Z] Copying: 634/1024 [MB] (46 MBps) [2024-11-20T13:49:19.393Z] Copying: 679/1024 [MB] (45 MBps) [2024-11-20T13:49:20.782Z] Copying: 725/1024 [MB] (45 MBps) [2024-11-20T13:49:21.713Z] Copying: 769/1024 [MB] (44 MBps) [2024-11-20T13:49:22.645Z] Copying: 815/1024 [MB] (45 MBps) [2024-11-20T13:49:23.629Z] Copying: 858/1024 [MB] (42 MBps) [2024-11-20T13:49:24.562Z] Copying: 903/1024 [MB] (45 MBps) [2024-11-20T13:49:25.496Z] Copying: 949/1024 [MB] (45 MBps) [2024-11-20T13:49:26.431Z] Copying: 996/1024 [MB] (46 MBps) [2024-11-20T13:49:27.365Z] Copying: 1023/1024 [MB] (27 MBps) [2024-11-20T13:49:27.365Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-20 13:49:27.059561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.059620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:29:27.938 [2024-11-20 13:49:27.059636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:27.938 [2024-11-20 13:49:27.059644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.062770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:27.938 [2024-11-20 13:49:27.068339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.068373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:27.938 [2024-11-20 13:49:27.068385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.377 ms 00:29:27.938 [2024-11-20 13:49:27.068395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.078641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.078674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:27.938 [2024-11-20 13:49:27.078685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.260 ms 00:29:27.938 [2024-11-20 13:49:27.078693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.096247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.096288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:27.938 [2024-11-20 13:49:27.096299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.537 ms 00:29:27.938 [2024-11-20 13:49:27.096307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.102484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.102519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:27.938 [2024-11-20 13:49:27.102528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.148 ms 00:29:27.938 [2024-11-20 13:49:27.102535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.125612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.125780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:27.938 [2024-11-20 13:49:27.125798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.028 ms 00:29:27.938 [2024-11-20 13:49:27.125806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.139731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.139769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:27.938 [2024-11-20 13:49:27.139781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.889 ms 00:29:27.938 [2024-11-20 13:49:27.139790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.192729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.192801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:27.938 [2024-11-20 13:49:27.192823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.896 ms 00:29:27.938 [2024-11-20 13:49:27.192831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
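Two figures from the startup-time layout dump earlier in this run cross-check cleanly (the arithmetic is mine, not part of the log):

  L2P:  20971520 entries x 4 B/entry = 83886080 B = 80.00 MiB
        -> matches "Region l2p ... blocks: 80.00 MiB"
  P2L:  2048 checkpoint pages x 4096 B = 8.00 MiB per region
        -> matches p2l0..p2l3 at 8.00 MiB each (blk_sz 0x800 = 2048 blocks)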
00:29:27.938 [2024-11-20 13:49:27.216502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.216549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:27.938 [2024-11-20 13:49:27.216561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.655 ms 00:29:27.938 [2024-11-20 13:49:27.216568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.239073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.239274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:27.938 [2024-11-20 13:49:27.239293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.467 ms 00:29:27.938 [2024-11-20 13:49:27.239300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.261793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.261837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:27.938 [2024-11-20 13:49:27.261848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.456 ms 00:29:27.938 [2024-11-20 13:49:27.261856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.283712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.938 [2024-11-20 13:49:27.283753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:27.938 [2024-11-20 13:49:27.283765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.796 ms 00:29:27.938 [2024-11-20 13:49:27.283772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.938 [2024-11-20 13:49:27.283808] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:27.938 [2024-11-20 13:49:27.283823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:29:27.938 [2024-11-20 13:49:27.283834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283911] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.283992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 
13:49:27.284125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:29:27.939 [2024-11-20 13:49:27.284307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:27.939 [2024-11-20 13:49:27.284520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:27.940 [2024-11-20 13:49:27.284605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:27.940 [2024-11-20 13:49:27.284613] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfa50ba-faa9-4dc9-a2f4-156b3eb229fe 00:29:27.940 [2024-11-20 13:49:27.284621] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:29:27.940 [2024-11-20 13:49:27.284632] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:29:27.940 [2024-11-20 13:49:27.284646] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:29:27.940 [2024-11-20 13:49:27.284654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:29:27.940 [2024-11-20 13:49:27.284661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:27.940 [2024-11-20 13:49:27.284668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:27.940 [2024-11-20 13:49:27.284675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:27.940 [2024-11-20 13:49:27.284682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:27.940 [2024-11-20 13:49:27.284688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:27.940 [2024-11-20 13:49:27.284695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.940 [2024-11-20 13:49:27.284703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:27.940 [2024-11-20 
13:49:27.284711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:29:27.940 [2024-11-20 13:49:27.284718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.940 [2024-11-20 13:49:27.296693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.940 [2024-11-20 13:49:27.296730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:27.940 [2024-11-20 13:49:27.296741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.959 ms 00:29:27.940 [2024-11-20 13:49:27.296749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.940 [2024-11-20 13:49:27.297139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.940 [2024-11-20 13:49:27.297161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:27.940 [2024-11-20 13:49:27.297170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:29:27.940 [2024-11-20 13:49:27.297188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.940 [2024-11-20 13:49:27.329642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.940 [2024-11-20 13:49:27.329689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.940 [2024-11-20 13:49:27.329700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.940 [2024-11-20 13:49:27.329708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.940 [2024-11-20 13:49:27.329775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.940 [2024-11-20 13:49:27.329783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.940 [2024-11-20 13:49:27.329791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.940 [2024-11-20 13:49:27.329800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.940 [2024-11-20 13:49:27.329863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.940 [2024-11-20 13:49:27.329873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.940 [2024-11-20 13:49:27.329881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.940 [2024-11-20 13:49:27.329888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.940 [2024-11-20 13:49:27.329902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.940 [2024-11-20 13:49:27.329910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.940 [2024-11-20 13:49:27.329918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.940 [2024-11-20 13:49:27.329925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.198 [2024-11-20 13:49:27.408296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.198 [2024-11-20 13:49:27.408347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:28.198 [2024-11-20 13:49:27.408358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.198 [2024-11-20 13:49:27.408366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.198 [2024-11-20 13:49:27.471492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.198 [2024-11-20 13:49:27.471536] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:28.198 [2024-11-20 13:49:27.471547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.198 [2024-11-20 13:49:27.471556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.198 [2024-11-20 13:49:27.471635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.198 [2024-11-20 13:49:27.471645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:28.198 [2024-11-20 13:49:27.471654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.198 [2024-11-20 13:49:27.471662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.198 [2024-11-20 13:49:27.471697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.198 [2024-11-20 13:49:27.471706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:28.198 [2024-11-20 13:49:27.471713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.198 [2024-11-20 13:49:27.471720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.198 [2024-11-20 13:49:27.471805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.198 [2024-11-20 13:49:27.471815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:28.199 [2024-11-20 13:49:27.471823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.199 [2024-11-20 13:49:27.471830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.199 [2024-11-20 13:49:27.471858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.199 [2024-11-20 13:49:27.471867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:28.199 [2024-11-20 13:49:27.471874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.199 [2024-11-20 13:49:27.471882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.199 [2024-11-20 13:49:27.471912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.199 [2024-11-20 13:49:27.471923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:28.199 [2024-11-20 13:49:27.471930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.199 [2024-11-20 13:49:27.471938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.199 [2024-11-20 13:49:27.472000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.199 [2024-11-20 13:49:27.472011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:28.199 [2024-11-20 13:49:27.472019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.199 [2024-11-20 13:49:27.472026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.199 [2024-11-20 13:49:27.472135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 414.549 ms, result 0 00:29:29.571 00:29:29.571 00:29:29.571 13:49:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:32.195 13:49:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:32.195 [2024-11-20 13:49:31.104474] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:29:32.195 [2024-11-20 13:49:31.104717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79588 ] 00:29:32.195 [2024-11-20 13:49:31.260162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.195 [2024-11-20 13:49:31.345399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.195 [2024-11-20 13:49:31.564704] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.195 [2024-11-20 13:49:31.564765] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.455 [2024-11-20 13:49:31.716683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.716743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:32.455 [2024-11-20 13:49:31.716759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:32.455 [2024-11-20 13:49:31.716768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.716820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.716830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:32.455 [2024-11-20 13:49:31.716840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:32.455 [2024-11-20 13:49:31.716848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.716867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:32.455 [2024-11-20 13:49:31.717631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:32.455 [2024-11-20 13:49:31.717659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.717667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:32.455 [2024-11-20 13:49:31.717676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:29:32.455 [2024-11-20 13:49:31.717683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.718777] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:32.455 [2024-11-20 13:49:31.730893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.731068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:32.455 [2024-11-20 13:49:31.731087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.117 ms 00:29:32.455 [2024-11-20 13:49:31.731096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.731152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.731162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:32.455 [2024-11-20 13:49:31.731170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:32.455 [2024-11-20 
13:49:31.731177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.736225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.736259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:32.455 [2024-11-20 13:49:31.736270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.990 ms 00:29:32.455 [2024-11-20 13:49:31.736281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.736360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.736369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:32.455 [2024-11-20 13:49:31.736377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:32.455 [2024-11-20 13:49:31.736384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.736426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.736436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:32.455 [2024-11-20 13:49:31.736444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:32.455 [2024-11-20 13:49:31.736451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.736475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:32.455 [2024-11-20 13:49:31.739799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.739827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:32.455 [2024-11-20 13:49:31.739836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.333 ms 00:29:32.455 [2024-11-20 13:49:31.739847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.739876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.739884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:32.455 [2024-11-20 13:49:31.739891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:32.455 [2024-11-20 13:49:31.739899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.739918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:32.455 [2024-11-20 13:49:31.739935] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:32.455 [2024-11-20 13:49:31.739985] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:32.455 [2024-11-20 13:49:31.740003] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:32.455 [2024-11-20 13:49:31.740105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:32.455 [2024-11-20 13:49:31.740115] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:32.455 [2024-11-20 13:49:31.740125] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:32.455 
[2024-11-20 13:49:31.740135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740143] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:32.455 [2024-11-20 13:49:31.740159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:32.455 [2024-11-20 13:49:31.740167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:32.455 [2024-11-20 13:49:31.740176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:32.455 [2024-11-20 13:49:31.740184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.740191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:32.455 [2024-11-20 13:49:31.740198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:29:32.455 [2024-11-20 13:49:31.740205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.740287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.455 [2024-11-20 13:49:31.740295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:32.455 [2024-11-20 13:49:31.740302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:32.455 [2024-11-20 13:49:31.740310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.455 [2024-11-20 13:49:31.740429] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:32.455 [2024-11-20 13:49:31.740439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:32.455 [2024-11-20 13:49:31.740448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:32.455 [2024-11-20 13:49:31.740469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:32.455 [2024-11-20 13:49:31.740490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:32.455 [2024-11-20 13:49:31.740503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:32.455 [2024-11-20 13:49:31.740510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:32.455 [2024-11-20 13:49:31.740517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:32.455 [2024-11-20 13:49:31.740523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:32.455 [2024-11-20 13:49:31.740530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:32.455 [2024-11-20 13:49:31.740542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:29:32.455 [2024-11-20 13:49:31.740555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:32.455 [2024-11-20 13:49:31.740576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:32.455 [2024-11-20 13:49:31.740596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.455 [2024-11-20 13:49:31.740608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:32.455 [2024-11-20 13:49:31.740615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:32.455 [2024-11-20 13:49:31.740621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.456 [2024-11-20 13:49:31.740628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:32.456 [2024-11-20 13:49:31.740634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:32.456 [2024-11-20 13:49:31.740640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.456 [2024-11-20 13:49:31.740646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:32.456 [2024-11-20 13:49:31.740653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:32.456 [2024-11-20 13:49:31.740659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:32.456 [2024-11-20 13:49:31.740666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:32.456 [2024-11-20 13:49:31.740672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:32.456 [2024-11-20 13:49:31.740679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:32.456 [2024-11-20 13:49:31.740685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:32.456 [2024-11-20 13:49:31.740692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:32.456 [2024-11-20 13:49:31.740698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.456 [2024-11-20 13:49:31.740704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:32.456 [2024-11-20 13:49:31.740710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:32.456 [2024-11-20 13:49:31.740717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.456 [2024-11-20 13:49:31.740723] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:32.456 [2024-11-20 13:49:31.740731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:32.456 [2024-11-20 13:49:31.740738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:32.456 [2024-11-20 13:49:31.740745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.456 [2024-11-20 13:49:31.740752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:32.456 [2024-11-20 13:49:31.740759] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:32.456 [2024-11-20 13:49:31.740766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:32.456 [2024-11-20 13:49:31.740773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:32.456 [2024-11-20 13:49:31.740779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:32.456 [2024-11-20 13:49:31.740786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:32.456 [2024-11-20 13:49:31.740795] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:32.456 [2024-11-20 13:49:31.740804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:32.456 [2024-11-20 13:49:31.740819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:32.456 [2024-11-20 13:49:31.740827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:32.456 [2024-11-20 13:49:31.740834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:32.456 [2024-11-20 13:49:31.740841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:32.456 [2024-11-20 13:49:31.740848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:32.456 [2024-11-20 13:49:31.740854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:32.456 [2024-11-20 13:49:31.740861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:32.456 [2024-11-20 13:49:31.740868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:32.456 [2024-11-20 13:49:31.740875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:32.456 [2024-11-20 13:49:31.740921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:32.456 [2024-11-20 13:49:31.740931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:32.456 [2024-11-20 13:49:31.740948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:32.456 [2024-11-20 13:49:31.740956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:32.456 [2024-11-20 13:49:31.740963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:32.456 [2024-11-20 13:49:31.740981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.740989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:32.456 [2024-11-20 13:49:31.740997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:29:32.456 [2024-11-20 13:49:31.741004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.767111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.767250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.456 [2024-11-20 13:49:31.767303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.063 ms 00:29:32.456 [2024-11-20 13:49:31.767325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.767430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.767452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:32.456 [2024-11-20 13:49:31.767472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:32.456 [2024-11-20 13:49:31.767490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.813998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.814175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:32.456 [2024-11-20 13:49:31.814235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.436 ms 00:29:32.456 [2024-11-20 13:49:31.814258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.814321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.814344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:32.456 [2024-11-20 13:49:31.814369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:32.456 [2024-11-20 13:49:31.814387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.814797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.814866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:32.456 [2024-11-20 13:49:31.814901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:29:32.456 [2024-11-20 13:49:31.814981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.815163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:32.456 [2024-11-20 13:49:31.815257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:32.456 [2024-11-20 13:49:31.815283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:29:32.456 [2024-11-20 13:49:31.815308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.828669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.828795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:32.456 [2024-11-20 13:49:31.828855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.300 ms 00:29:32.456 [2024-11-20 13:49:31.828878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.841120] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:32.456 [2024-11-20 13:49:31.841267] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:32.456 [2024-11-20 13:49:31.841327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.841348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:32.456 [2024-11-20 13:49:31.841370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.291 ms 00:29:32.456 [2024-11-20 13:49:31.841388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.865434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.865593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:32.456 [2024-11-20 13:49:31.865646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.994 ms 00:29:32.456 [2024-11-20 13:49:31.865668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.456 [2024-11-20 13:49:31.877243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.456 [2024-11-20 13:49:31.877391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:32.456 [2024-11-20 13:49:31.877444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.515 ms 00:29:32.456 [2024-11-20 13:49:31.877466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.889388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.889548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:32.715 [2024-11-20 13:49:31.889605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.841 ms 00:29:32.715 [2024-11-20 13:49:31.889627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.890294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.890386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:32.715 [2024-11-20 13:49:31.890436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:29:32.715 [2024-11-20 13:49:31.890462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.945770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.945958] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:32.715 [2024-11-20 13:49:31.946109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.276 ms 00:29:32.715 [2024-11-20 13:49:31.946204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.956956] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:32.715 [2024-11-20 13:49:31.959712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.959816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:32.715 [2024-11-20 13:49:31.959866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.447 ms 00:29:32.715 [2024-11-20 13:49:31.959889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.960022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.960051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:32.715 [2024-11-20 13:49:31.960071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:32.715 [2024-11-20 13:49:31.960123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.961628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.961730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:32.715 [2024-11-20 13:49:31.961781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:29:32.715 [2024-11-20 13:49:31.961803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.961844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.961984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:32.715 [2024-11-20 13:49:31.962018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:32.715 [2024-11-20 13:49:31.962084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.962139] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:32.715 [2024-11-20 13:49:31.962163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.962199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:32.715 [2024-11-20 13:49:31.962219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:32.715 [2024-11-20 13:49:31.962265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.985504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.985638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:32.715 [2024-11-20 13:49:31.985692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.204 ms 00:29:32.715 [2024-11-20 13:49:31.985720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.986029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.715 [2024-11-20 13:49:31.986052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:32.715 [2024-11-20 13:49:31.986062] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:32.715 [2024-11-20 13:49:31.986070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.715 [2024-11-20 13:49:31.986965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.883 ms, result 0 00:29:34.089  [2024-11-20T13:49:34.461Z] Copying: 1212/1048576 [kB] (1212 kBps) [2024-11-20T13:49:35.410Z] Copying: 5764/1048576 [kB] (4552 kBps) [2024-11-20T13:49:36.343Z] Copying: 54/1024 [MB] (48 MBps) [2024-11-20T13:49:37.276Z] Copying: 104/1024 [MB] (49 MBps) [2024-11-20T13:49:38.228Z] Copying: 155/1024 [MB] (51 MBps) [2024-11-20T13:49:39.197Z] Copying: 207/1024 [MB] (52 MBps) [2024-11-20T13:49:40.570Z] Copying: 260/1024 [MB] (52 MBps) [2024-11-20T13:49:41.504Z] Copying: 314/1024 [MB] (54 MBps) [2024-11-20T13:49:42.435Z] Copying: 367/1024 [MB] (52 MBps) [2024-11-20T13:49:43.368Z] Copying: 420/1024 [MB] (53 MBps) [2024-11-20T13:49:44.302Z] Copying: 473/1024 [MB] (52 MBps) [2024-11-20T13:49:45.235Z] Copying: 526/1024 [MB] (52 MBps) [2024-11-20T13:49:46.608Z] Copying: 574/1024 [MB] (48 MBps) [2024-11-20T13:49:47.173Z] Copying: 628/1024 [MB] (53 MBps) [2024-11-20T13:49:48.546Z] Copying: 677/1024 [MB] (49 MBps) [2024-11-20T13:49:49.479Z] Copying: 729/1024 [MB] (52 MBps) [2024-11-20T13:49:50.506Z] Copying: 780/1024 [MB] (50 MBps) [2024-11-20T13:49:51.439Z] Copying: 832/1024 [MB] (52 MBps) [2024-11-20T13:49:52.373Z] Copying: 885/1024 [MB] (53 MBps) [2024-11-20T13:49:53.304Z] Copying: 939/1024 [MB] (53 MBps) [2024-11-20T13:49:53.891Z] Copying: 991/1024 [MB] (52 MBps) [2024-11-20T13:49:54.458Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-20 13:49:54.359933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.360016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:55.031 [2024-11-20 13:49:54.360031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:55.031 [2024-11-20 13:49:54.360039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.360061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:55.031 [2024-11-20 13:49:54.362643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.362675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:55.031 [2024-11-20 13:49:54.362686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:29:55.031 [2024-11-20 13:49:54.362695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.362913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.362924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:55.031 [2024-11-20 13:49:54.362935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:29:55.031 [2024-11-20 13:49:54.362942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.371213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.371248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:55.031 [2024-11-20 13:49:54.371259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.256 ms 00:29:55.031 [2024-11-20 
13:49:54.371266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.377548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.377695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:55.031 [2024-11-20 13:49:54.377718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.256 ms 00:29:55.031 [2024-11-20 13:49:54.377726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.402056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.402105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:55.031 [2024-11-20 13:49:54.402118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.283 ms 00:29:55.031 [2024-11-20 13:49:54.402125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.415743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.415797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:55.031 [2024-11-20 13:49:54.415810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:29:55.031 [2024-11-20 13:49:54.415818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.417800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.417844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:55.031 [2024-11-20 13:49:54.417857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:29:55.031 [2024-11-20 13:49:54.417866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.031 [2024-11-20 13:49:54.446269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.031 [2024-11-20 13:49:54.446310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:55.031 [2024-11-20 13:49:54.446321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.380 ms 00:29:55.031 [2024-11-20 13:49:54.446329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.291 [2024-11-20 13:49:54.469121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.291 [2024-11-20 13:49:54.469159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:55.291 [2024-11-20 13:49:54.469181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.756 ms 00:29:55.291 [2024-11-20 13:49:54.469189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.291 [2024-11-20 13:49:54.491455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.291 [2024-11-20 13:49:54.491622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:55.291 [2024-11-20 13:49:54.491638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.231 ms 00:29:55.291 [2024-11-20 13:49:54.491647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.291 [2024-11-20 13:49:54.513427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.291 [2024-11-20 13:49:54.513461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:55.291 [2024-11-20 13:49:54.513471] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.724 ms 00:29:55.291 [2024-11-20 13:49:54.513478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.291 [2024-11-20 13:49:54.513509] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:55.291 [2024-11-20 13:49:54.513524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:55.291 [2024-11-20 13:49:54.513534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:55.291 [2024-11-20 13:49:54.513542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 
wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.513994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:55.291 [2024-11-20 13:49:54.514077] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514284] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:55.292 [2024-11-20 13:49:54.514315] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:55.292 [2024-11-20 13:49:54.514322] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfa50ba-faa9-4dc9-a2f4-156b3eb229fe 00:29:55.292 [2024-11-20 13:49:54.514330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:55.292 [2024-11-20 13:49:54.514337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:29:55.292 [2024-11-20 13:49:54.514344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:29:55.292 [2024-11-20 13:49:54.514356] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:29:55.292 [2024-11-20 13:49:54.514363] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:55.292 [2024-11-20 13:49:54.514371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:55.292 [2024-11-20 13:49:54.514378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:55.292 [2024-11-20 13:49:54.514391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:55.292 [2024-11-20 13:49:54.514397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:55.292 [2024-11-20 13:49:54.514404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.292 [2024-11-20 13:49:54.514412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:55.292 [2024-11-20 13:49:54.514420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:29:55.292 [2024-11-20 13:49:54.514427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.526477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.292 [2024-11-20 13:49:54.526510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:55.292 [2024-11-20 13:49:54.526522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.034 ms 00:29:55.292 [2024-11-20 13:49:54.526530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.526862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.292 [2024-11-20 13:49:54.526874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:55.292 [2024-11-20 13:49:54.526883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:29:55.292 [2024-11-20 13:49:54.526890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.558827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.558862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.292 [2024-11-20 13:49:54.558872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.558879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.558938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:55.292 [2024-11-20 13:49:54.558947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.292 [2024-11-20 13:49:54.558955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.558962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.559036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.559047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.292 [2024-11-20 13:49:54.559055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.559062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.559076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.559084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.292 [2024-11-20 13:49:54.559091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.559098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.636354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.636410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:55.292 [2024-11-20 13:49:54.636422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.636430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.698508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.698558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.292 [2024-11-20 13:49:54.698570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.698578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.698650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.698665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.292 [2024-11-20 13:49:54.698674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.698681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.698713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.698722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.292 [2024-11-20 13:49:54.698730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.698737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 [2024-11-20 13:49:54.698824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.292 [2024-11-20 13:49:54.698833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.292 [2024-11-20 13:49:54.698845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.292 [2024-11-20 13:49:54.698852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.292 
[2024-11-20 13:49:54.698879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:55.292 [2024-11-20 13:49:54.698888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:55.292 [2024-11-20 13:49:54.698895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:55.292 [2024-11-20 13:49:54.698902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:55.292 [2024-11-20 13:49:54.698935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:55.292 [2024-11-20 13:49:54.698944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:55.292 [2024-11-20 13:49:54.698951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:55.292 [2024-11-20 13:49:54.698962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:55.292 [2024-11-20 13:49:54.699012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:55.292 [2024-11-20 13:49:54.699023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:55.292 [2024-11-20 13:49:54.699031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:55.292 [2024-11-20 13:49:54.699038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:55.292 [2024-11-20 13:49:54.699145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.185 ms, result 0
00:29:56.227
00:29:56.227
00:29:56.227 13:49:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:58.125 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:29:58.125 13:49:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:58.383 [2024-11-20 13:49:57.592008] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
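The two commands traced above at dirty_shutdown.sh@94 and @95 carry the verification pass: the first confirms that the half of the test data written before the dirty shutdown still checksums clean, and the second recreates ftl0 from the saved JSON config and copies the second half of the device out to a plain file for the next checksum. A minimal standalone sketch of that step follows; SPDK_DIR is an illustrative stand-in for the checkout used in this run, and --skip/--count are read here with dd-style semantics (skip N input blocks, then copy N blocks).

# Sketch of the verify-then-read step around dirty_shutdown.sh@94-95.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Verify the first 262144-block half written before the dirty shutdown.
md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"

# Re-read the second half directly from the FTL bdev. --json recreates
# ftl0 from the configuration saved before the shutdown; --skip starts
# the copy at block 262144 and --count reads 262144 blocks.
"$SPDK_DIR/build/bin/spdk_dd" \
    --ib=ftl0 \
    --of="$SPDK_DIR/test/ftl/testfile2" \
    --count=262144 \
    --skip=262144 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json"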
00:29:58.383 [2024-11-20 13:49:57.592131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79859 ] 00:29:58.383 [2024-11-20 13:49:57.750176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.640 [2024-11-20 13:49:57.849750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.900 [2024-11-20 13:49:58.108098] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:58.900 [2024-11-20 13:49:58.108166] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:58.900 [2024-11-20 13:49:58.261821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.261875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:58.900 [2024-11-20 13:49:58.261892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:58.900 [2024-11-20 13:49:58.261900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.261948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.261958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:58.900 [2024-11-20 13:49:58.261986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:58.900 [2024-11-20 13:49:58.261994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.262014] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:58.900 [2024-11-20 13:49:58.262769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:58.900 [2024-11-20 13:49:58.262799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.262807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:58.900 [2024-11-20 13:49:58.262816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:29:58.900 [2024-11-20 13:49:58.262823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.263881] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:58.900 [2024-11-20 13:49:58.276146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.276184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:58.900 [2024-11-20 13:49:58.276196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.267 ms 00:29:58.900 [2024-11-20 13:49:58.276204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.276270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.276279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:58.900 [2024-11-20 13:49:58.276288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:58.900 [2024-11-20 13:49:58.276295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.281371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:58.900 [2024-11-20 13:49:58.281405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:58.900 [2024-11-20 13:49:58.281416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms 00:29:58.900 [2024-11-20 13:49:58.281430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.281509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.281518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:58.900 [2024-11-20 13:49:58.281526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:58.900 [2024-11-20 13:49:58.281534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.281576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.281585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:58.900 [2024-11-20 13:49:58.281594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:58.900 [2024-11-20 13:49:58.281601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.281625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:58.900 [2024-11-20 13:49:58.284997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.285023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:58.900 [2024-11-20 13:49:58.285032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:29:58.900 [2024-11-20 13:49:58.285042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.285071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.285079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:58.900 [2024-11-20 13:49:58.285087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:58.900 [2024-11-20 13:49:58.285094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.285113] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:58.900 [2024-11-20 13:49:58.285130] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:58.900 [2024-11-20 13:49:58.285164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:58.900 [2024-11-20 13:49:58.285180] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:58.900 [2024-11-20 13:49:58.285280] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:58.900 [2024-11-20 13:49:58.285290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:58.900 [2024-11-20 13:49:58.285300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:58.900 [2024-11-20 13:49:58.285310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285318] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:58.900 [2024-11-20 13:49:58.285333] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:58.900 [2024-11-20 13:49:58.285341] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:58.900 [2024-11-20 13:49:58.285350] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:58.900 [2024-11-20 13:49:58.285357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.285365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:58.900 [2024-11-20 13:49:58.285372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:29:58.900 [2024-11-20 13:49:58.285379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.285461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.900 [2024-11-20 13:49:58.285468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:58.900 [2024-11-20 13:49:58.285475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:58.900 [2024-11-20 13:49:58.285482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.900 [2024-11-20 13:49:58.285597] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:58.900 [2024-11-20 13:49:58.285607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:58.900 [2024-11-20 13:49:58.285615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:58.900 [2024-11-20 13:49:58.285637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:58.900 [2024-11-20 13:49:58.285658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:58.900 [2024-11-20 13:49:58.285672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:58.900 [2024-11-20 13:49:58.285678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:58.900 [2024-11-20 13:49:58.285684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:58.900 [2024-11-20 13:49:58.285690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:58.900 [2024-11-20 13:49:58.285697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:58.900 [2024-11-20 13:49:58.285709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:58.900 [2024-11-20 13:49:58.285723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285729] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:58.900 [2024-11-20 13:49:58.285742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:58.900 [2024-11-20 13:49:58.285760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:58.900 [2024-11-20 13:49:58.285779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.900 [2024-11-20 13:49:58.285792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:58.900 [2024-11-20 13:49:58.285798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:58.900 [2024-11-20 13:49:58.285804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.901 [2024-11-20 13:49:58.285810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:58.901 [2024-11-20 13:49:58.285816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:58.901 [2024-11-20 13:49:58.285822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:58.901 [2024-11-20 13:49:58.285828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:58.901 [2024-11-20 13:49:58.285835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:58.901 [2024-11-20 13:49:58.285842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:58.901 [2024-11-20 13:49:58.285848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:58.901 [2024-11-20 13:49:58.285854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:58.901 [2024-11-20 13:49:58.285860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.901 [2024-11-20 13:49:58.285866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:58.901 [2024-11-20 13:49:58.285872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:58.901 [2024-11-20 13:49:58.285879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.901 [2024-11-20 13:49:58.285885] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:58.901 [2024-11-20 13:49:58.285892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:58.901 [2024-11-20 13:49:58.285899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:58.901 [2024-11-20 13:49:58.285907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.901 [2024-11-20 13:49:58.285915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:58.901 [2024-11-20 13:49:58.285921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:58.901 [2024-11-20 13:49:58.285928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:58.901 
[2024-11-20 13:49:58.285934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:58.901 [2024-11-20 13:49:58.285940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:58.901 [2024-11-20 13:49:58.285947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:58.901 [2024-11-20 13:49:58.285954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:58.901 [2024-11-20 13:49:58.285963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.285990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:58.901 [2024-11-20 13:49:58.285997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:58.901 [2024-11-20 13:49:58.286004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:58.901 [2024-11-20 13:49:58.286011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:58.901 [2024-11-20 13:49:58.286018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:58.901 [2024-11-20 13:49:58.286025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:58.901 [2024-11-20 13:49:58.286033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:58.901 [2024-11-20 13:49:58.286039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:58.901 [2024-11-20 13:49:58.286046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:58.901 [2024-11-20 13:49:58.286053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.286060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.286067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.286074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.286081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:58.901 [2024-11-20 13:49:58.286088] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:58.901 [2024-11-20 13:49:58.286098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.286107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:58.901 [2024-11-20 13:49:58.286114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:58.901 [2024-11-20 13:49:58.286121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:58.901 [2024-11-20 13:49:58.286128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:58.901 [2024-11-20 13:49:58.286135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.901 [2024-11-20 13:49:58.286142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:58.901 [2024-11-20 13:49:58.286149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:29:58.901 [2024-11-20 13:49:58.286158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.901 [2024-11-20 13:49:58.311638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.901 [2024-11-20 13:49:58.311789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:58.901 [2024-11-20 13:49:58.311806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.436 ms 00:29:58.901 [2024-11-20 13:49:58.311814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.901 [2024-11-20 13:49:58.311904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.901 [2024-11-20 13:49:58.311912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:58.901 [2024-11-20 13:49:58.311920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:58.901 [2024-11-20 13:49:58.311927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.358000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.358049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.159 [2024-11-20 13:49:58.358062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.004 ms 00:29:59.159 [2024-11-20 13:49:58.358070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.358122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.358131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.159 [2024-11-20 13:49:58.358143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:59.159 [2024-11-20 13:49:58.358150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.358530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.358547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.159 [2024-11-20 13:49:58.358556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:29:59.159 [2024-11-20 13:49:58.358563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.358690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.358700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.159 [2024-11-20 13:49:58.358708] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:29:59.159 [2024-11-20 13:49:58.358719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.371771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.371806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.159 [2024-11-20 13:49:58.371819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.034 ms 00:29:59.159 [2024-11-20 13:49:58.371827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.384196] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:59.159 [2024-11-20 13:49:58.384349] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:59.159 [2024-11-20 13:49:58.384365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.384373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:59.159 [2024-11-20 13:49:58.384383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.442 ms 00:29:59.159 [2024-11-20 13:49:58.384389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.408531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.408571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:59.159 [2024-11-20 13:49:58.408584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.103 ms 00:29:59.159 [2024-11-20 13:49:58.408591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.419681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.419806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:59.159 [2024-11-20 13:49:58.419821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.046 ms 00:29:59.159 [2024-11-20 13:49:58.419828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.430867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.430987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:59.159 [2024-11-20 13:49:58.431002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.006 ms 00:29:59.159 [2024-11-20 13:49:58.431009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.431614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.431635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.159 [2024-11-20 13:49:58.431644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:29:59.159 [2024-11-20 13:49:58.431653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.486500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.486692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:59.159 [2024-11-20 13:49:58.486717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.829 ms 00:29:59.159 [2024-11-20 13:49:58.486725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.497590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:59.159 [2024-11-20 13:49:58.500349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.500384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:59.159 [2024-11-20 13:49:58.500397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.579 ms 00:29:59.159 [2024-11-20 13:49:58.500407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.500516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.500527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:59.159 [2024-11-20 13:49:58.500535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:59.159 [2024-11-20 13:49:58.500545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.501218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.501355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.159 [2024-11-20 13:49:58.501418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:29:59.159 [2024-11-20 13:49:58.501441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.501514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.501537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.159 [2024-11-20 13:49:58.501558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:59.159 [2024-11-20 13:49:58.501598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.159 [2024-11-20 13:49:58.501675] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:59.159 [2024-11-20 13:49:58.501725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.159 [2024-11-20 13:49:58.501748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:59.160 [2024-11-20 13:49:58.501768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:59.160 [2024-11-20 13:49:58.501869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.160 [2024-11-20 13:49:58.524487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.160 [2024-11-20 13:49:58.524627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:59.160 [2024-11-20 13:49:58.524644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.568 ms 00:29:59.160 [2024-11-20 13:49:58.524657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.160 [2024-11-20 13:49:58.524726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.160 [2024-11-20 13:49:58.524736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:59.160 [2024-11-20 13:49:58.524744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:59.160 [2024-11-20 13:49:58.524751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
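Some of the figures in the startup trace above can be cross-checked by hand: the layout dump reported 20971520 L2P entries at an address size of 4 bytes, which comes to exactly the 80.00 MiB logged for the l2p region, and the WAF of 1.0149 reported after the first shutdown is simply total writes over user writes (135104 / 133120). A minimal sketch of the arithmetic, assuming a 4 KiB FTL logical block size (an assumption about this build's defaults, not stated in the log):

# Hand-check of figures from the trace above.
entries=20971520   # "L2P entries" from the layout dump
addr=4             # "L2P address size" in bytes
block=4096         # assumed FTL logical block size
echo "l2p region: $(( entries * addr / 1024 / 1024 )) MiB"       # 80 MiB, matches the dump
echo "mapped space: $(( entries * block / 1024**3 )) GiB"        # 80 GiB of addressable LBAs
echo "WAF: $(echo 'scale=4; 135104/133120' | bc)"                # 1.0149, as logged

The 80 GiB of mapped user space sits inside the 103424.00 MiB base device; the remainder is held back for over-provisioning and FTL metadata. The second shutdown later in this log reports "WAF: inf" for the complementary reason: that run performed no user writes (user writes: 0), so the ratio degenerates.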
00:29:59.160 [2024-11-20 13:49:58.525709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.482 ms, result 0 00:30:00.531  [2024-11-20T13:50:00.945Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-20T13:50:01.877Z] Copying: 94/1024 [MB] (46 MBps) [2024-11-20T13:50:02.810Z] Copying: 139/1024 [MB] (45 MBps) [2024-11-20T13:50:03.746Z] Copying: 189/1024 [MB] (49 MBps) [2024-11-20T13:50:05.119Z] Copying: 238/1024 [MB] (48 MBps) [2024-11-20T13:50:06.052Z] Copying: 285/1024 [MB] (47 MBps) [2024-11-20T13:50:07.005Z] Copying: 329/1024 [MB] (43 MBps) [2024-11-20T13:50:07.936Z] Copying: 370/1024 [MB] (40 MBps) [2024-11-20T13:50:08.869Z] Copying: 415/1024 [MB] (45 MBps) [2024-11-20T13:50:09.802Z] Copying: 461/1024 [MB] (45 MBps) [2024-11-20T13:50:10.734Z] Copying: 505/1024 [MB] (43 MBps) [2024-11-20T13:50:12.108Z] Copying: 551/1024 [MB] (46 MBps) [2024-11-20T13:50:13.042Z] Copying: 597/1024 [MB] (46 MBps) [2024-11-20T13:50:13.986Z] Copying: 646/1024 [MB] (48 MBps) [2024-11-20T13:50:14.924Z] Copying: 692/1024 [MB] (46 MBps) [2024-11-20T13:50:15.857Z] Copying: 739/1024 [MB] (46 MBps) [2024-11-20T13:50:16.791Z] Copying: 785/1024 [MB] (46 MBps) [2024-11-20T13:50:17.732Z] Copying: 830/1024 [MB] (44 MBps) [2024-11-20T13:50:19.105Z] Copying: 877/1024 [MB] (47 MBps) [2024-11-20T13:50:20.043Z] Copying: 926/1024 [MB] (48 MBps) [2024-11-20T13:50:20.974Z] Copying: 972/1024 [MB] (46 MBps) [2024-11-20T13:50:20.974Z] Copying: 1010/1024 [MB] (37 MBps) [2024-11-20T13:50:21.232Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-20 13:50:21.067889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.805 [2024-11-20 13:50:21.067953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:21.805 [2024-11-20 13:50:21.067980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:21.805 [2024-11-20 13:50:21.067990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.805 [2024-11-20 13:50:21.068014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:21.805 [2024-11-20 13:50:21.071178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.805 [2024-11-20 13:50:21.071214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:21.805 [2024-11-20 13:50:21.071233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms 00:30:21.805 [2024-11-20 13:50:21.071242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.805 [2024-11-20 13:50:21.071493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.805 [2024-11-20 13:50:21.071504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:21.805 [2024-11-20 13:50:21.071514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:30:21.805 [2024-11-20 13:50:21.071522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.805 [2024-11-20 13:50:21.075277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.805 [2024-11-20 13:50:21.075300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:21.805 [2024-11-20 13:50:21.075309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:30:21.805 [2024-11-20 13:50:21.075318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.805 [2024-11-20 13:50:21.081448] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.805 [2024-11-20 13:50:21.081599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:21.805 [2024-11-20 13:50:21.081615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:30:21.805 [2024-11-20 13:50:21.081622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.105313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.105483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:21.806 [2024-11-20 13:50:21.105540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.632 ms 00:30:21.806 [2024-11-20 13:50:21.105687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.119127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.119248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:21.806 [2024-11-20 13:50:21.119307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:30:21.806 [2024-11-20 13:50:21.119330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.120459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.120560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:21.806 [2024-11-20 13:50:21.120615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:30:21.806 [2024-11-20 13:50:21.120639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.143532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.143671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:21.806 [2024-11-20 13:50:21.143720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.864 ms 00:30:21.806 [2024-11-20 13:50:21.143742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.166208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.166338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:21.806 [2024-11-20 13:50:21.166386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.433 ms 00:30:21.806 [2024-11-20 13:50:21.166408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.188423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.188538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:21.806 [2024-11-20 13:50:21.188646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.983 ms 00:30:21.806 [2024-11-20 13:50:21.188676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.210615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.806 [2024-11-20 13:50:21.210747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:21.806 [2024-11-20 13:50:21.210796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.881 ms 00:30:21.806 [2024-11-20 13:50:21.210818] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:21.806 [2024-11-20 13:50:21.210852] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:21.806 [2024-11-20 13:50:21.210878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:21.806 [2024-11-20 13:50:21.210914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:21.806 [2024-11-20 13:50:21.210944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.210988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 
261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.211995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:21.806 [2024-11-20 13:50:21.212924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.212992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213714] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.213984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 
13:50:21.213992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.214000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:21.807 [2024-11-20 13:50:21.214016] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:21.807 [2024-11-20 13:50:21.214028] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfa50ba-faa9-4dc9-a2f4-156b3eb229fe 00:30:21.807 [2024-11-20 13:50:21.214036] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:21.807 [2024-11-20 13:50:21.214043] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:21.807 [2024-11-20 13:50:21.214050] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:21.807 [2024-11-20 13:50:21.214058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:21.807 [2024-11-20 13:50:21.214064] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:21.807 [2024-11-20 13:50:21.214073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:21.807 [2024-11-20 13:50:21.214087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:21.807 [2024-11-20 13:50:21.214093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:21.807 [2024-11-20 13:50:21.214099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:21.807 [2024-11-20 13:50:21.214106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.807 [2024-11-20 13:50:21.214114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:21.807 [2024-11-20 13:50:21.214122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:30:21.807 [2024-11-20 13:50:21.214129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.807 [2024-11-20 13:50:21.226580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.807 [2024-11-20 13:50:21.226617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:21.807 [2024-11-20 13:50:21.226629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.427 ms 00:30:21.807 [2024-11-20 13:50:21.226637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.807 [2024-11-20 13:50:21.227020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.807 [2024-11-20 13:50:21.227030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:21.807 [2024-11-20 13:50:21.227044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:30:21.807 [2024-11-20 13:50:21.227051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.259546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.259594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:22.065 [2024-11-20 13:50:21.259605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.259613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.259676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.259684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
bands metadata 00:30:22.065 [2024-11-20 13:50:21.259697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.259704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.259772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.259782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:22.065 [2024-11-20 13:50:21.259789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.259797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.259810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.259818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:22.065 [2024-11-20 13:50:21.259826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.259835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.336387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.336439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:22.065 [2024-11-20 13:50:21.336451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.336459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.398906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.399136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:22.065 [2024-11-20 13:50:21.399158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.399240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:22.065 [2024-11-20 13:50:21.399249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.399295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:22.065 [2024-11-20 13:50:21.399303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.399416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:22.065 [2024-11-20 13:50:21.399424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 
13:50:21.399467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:22.065 [2024-11-20 13:50:21.399474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.399525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:22.065 [2024-11-20 13:50:21.399533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.065 [2024-11-20 13:50:21.399584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:22.065 [2024-11-20 13:50:21.399592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.065 [2024-11-20 13:50:21.399599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.065 [2024-11-20 13:50:21.399707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.796 ms, result 0 00:30:22.998 00:30:22.998 00:30:22.998 13:50:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:24.896 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:24.896 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:24.896 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:24.896 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:24.896 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:25.154 Process with pid 78620 is not found 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78620 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78620 ']' 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78620 00:30:25.154 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78620) - No such process 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78620 is not found' 00:30:25.154 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:25.421 Remove shared memory files 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # 
rm -f rm -f 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:25.421 ************************************ 00:30:25.421 END TEST ftl_dirty_shutdown 00:30:25.421 ************************************ 00:30:25.421 00:30:25.421 real 2m21.640s 00:30:25.421 user 2m41.605s 00:30:25.421 sys 0m23.672s 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.421 13:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:25.421 13:50:24 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:25.421 13:50:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:25.421 13:50:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.421 13:50:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:25.421 ************************************ 00:30:25.421 START TEST ftl_upgrade_shutdown 00:30:25.421 ************************************ 00:30:25.421 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:25.682 * Looking for test storage... 00:30:25.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.682 --rc genhtml_branch_coverage=1 00:30:25.682 --rc genhtml_function_coverage=1 00:30:25.682 --rc genhtml_legend=1 00:30:25.682 --rc geninfo_all_blocks=1 00:30:25.682 --rc geninfo_unexecuted_blocks=1 00:30:25.682 00:30:25.682 ' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.682 --rc genhtml_branch_coverage=1 00:30:25.682 --rc genhtml_function_coverage=1 00:30:25.682 --rc genhtml_legend=1 00:30:25.682 --rc geninfo_all_blocks=1 00:30:25.682 --rc geninfo_unexecuted_blocks=1 00:30:25.682 00:30:25.682 ' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.682 --rc genhtml_branch_coverage=1 00:30:25.682 --rc genhtml_function_coverage=1 00:30:25.682 --rc genhtml_legend=1 00:30:25.682 --rc geninfo_all_blocks=1 00:30:25.682 --rc geninfo_unexecuted_blocks=1 00:30:25.682 00:30:25.682 ' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.682 --rc genhtml_branch_coverage=1 00:30:25.682 --rc genhtml_function_coverage=1 00:30:25.682 --rc genhtml_legend=1 00:30:25.682 --rc geninfo_all_blocks=1 00:30:25.682 --rc geninfo_unexecuted_blocks=1 00:30:25.682 00:30:25.682 ' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:25.682 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:25.683 13:50:24 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80214 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80214 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80214 ']' 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.683 13:50:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:25.683 [2024-11-20 13:50:25.071599] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
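The trace above is tcp_target_setup from test/ftl/common.sh: it exports the FTL geometry (a 20480 MiB base device on 0000:00:11.0, a 5120 MiB NV cache on 0000:00:10.0, a 2 MiB L2P DRAM limit), launches spdk_tgt pinned to core 0, and blocks in waitforlisten until pid 80214 answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, assuming the default RPC socket path; the retry loop is illustrative, and the real waitforlisten in autotest_common.sh is more careful about timeouts and socket types:

    # Sketch only -- paths match the log, the polling loop is simplified.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # rpc.py exits non-zero until the target's UNIX socket accepts RPCs.
    while ! "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_tgt_pid" 2> /dev/null || exit 1   # died before listening
        sleep 0.5
    done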
00:30:25.683 [2024-11-20 13:50:25.071901] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80214 ] 00:30:25.942 [2024-11-20 13:50:25.247897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.942 [2024-11-20 13:50:25.347802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:26.877 13:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:26.877 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:27.136 { 00:30:27.136 "name": "basen1", 00:30:27.136 "aliases": [ 00:30:27.136 "942c06a6-bcca-4383-94b0-40271ecb9824" 00:30:27.136 ], 00:30:27.136 "product_name": "NVMe disk", 00:30:27.136 "block_size": 4096, 00:30:27.136 "num_blocks": 1310720, 00:30:27.136 "uuid": "942c06a6-bcca-4383-94b0-40271ecb9824", 00:30:27.136 "numa_id": -1, 00:30:27.136 "assigned_rate_limits": { 00:30:27.136 "rw_ios_per_sec": 0, 00:30:27.136 "rw_mbytes_per_sec": 0, 00:30:27.136 "r_mbytes_per_sec": 0, 00:30:27.136 "w_mbytes_per_sec": 0 00:30:27.136 }, 00:30:27.136 "claimed": true, 00:30:27.136 "claim_type": "read_many_write_one", 00:30:27.136 "zoned": false, 00:30:27.136 "supported_io_types": { 00:30:27.136 "read": true, 00:30:27.136 "write": true, 00:30:27.136 "unmap": true, 00:30:27.136 "flush": true, 00:30:27.136 "reset": true, 00:30:27.136 "nvme_admin": true, 00:30:27.136 "nvme_io": true, 00:30:27.136 "nvme_io_md": false, 00:30:27.136 "write_zeroes": true, 00:30:27.136 "zcopy": false, 00:30:27.136 "get_zone_info": false, 00:30:27.136 "zone_management": false, 00:30:27.136 "zone_append": false, 00:30:27.136 "compare": true, 00:30:27.136 "compare_and_write": false, 00:30:27.136 "abort": true, 00:30:27.136 "seek_hole": false, 00:30:27.136 "seek_data": false, 00:30:27.136 "copy": true, 00:30:27.136 "nvme_iov_md": false 00:30:27.136 }, 00:30:27.136 "driver_specific": { 00:30:27.136 "nvme": [ 00:30:27.136 { 00:30:27.136 "pci_address": "0000:00:11.0", 00:30:27.136 "trid": { 00:30:27.136 "trtype": "PCIe", 00:30:27.136 "traddr": "0000:00:11.0" 00:30:27.136 }, 00:30:27.136 "ctrlr_data": { 00:30:27.136 "cntlid": 0, 00:30:27.136 "vendor_id": "0x1b36", 00:30:27.136 "model_number": "QEMU NVMe Ctrl", 00:30:27.136 "serial_number": "12341", 00:30:27.136 "firmware_revision": "8.0.0", 00:30:27.136 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:27.136 "oacs": { 00:30:27.136 "security": 0, 00:30:27.136 "format": 1, 00:30:27.136 "firmware": 0, 00:30:27.136 "ns_manage": 1 00:30:27.136 }, 00:30:27.136 "multi_ctrlr": false, 00:30:27.136 "ana_reporting": false 00:30:27.136 }, 00:30:27.136 "vs": { 00:30:27.136 "nvme_version": "1.4" 00:30:27.136 }, 00:30:27.136 "ns_data": { 00:30:27.136 "id": 1, 00:30:27.136 "can_share": false 00:30:27.136 } 00:30:27.136 } 00:30:27.136 ], 00:30:27.136 "mp_policy": "active_passive" 00:30:27.136 } 00:30:27.136 } 00:30:27.136 ]' 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:27.136 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:27.393 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ef86f9ef-044c-4e6c-830d-935c93e90f8b 00:30:27.393 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:27.393 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef86f9ef-044c-4e6c-830d-935c93e90f8b 00:30:27.651 13:50:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:27.651 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=84d43102-e07e-48dd-b9dd-407f6769a600 00:30:27.651 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 84d43102-e07e-48dd-b9dd-407f6769a600 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8b273a66-7d4c-48ac-9e7b-c81744404a26 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8b273a66-7d4c-48ac-9e7b-c81744404a26 ]] 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8b273a66-7d4c-48ac-9e7b-c81744404a26 5120 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8b273a66-7d4c-48ac-9e7b-c81744404a26 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8b273a66-7d4c-48ac-9e7b-c81744404a26 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8b273a66-7d4c-48ac-9e7b-c81744404a26 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:27.908 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b273a66-7d4c-48ac-9e7b-c81744404a26 00:30:28.166 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.166 { 00:30:28.166 "name": "8b273a66-7d4c-48ac-9e7b-c81744404a26", 00:30:28.166 "aliases": [ 00:30:28.166 "lvs/basen1p0" 00:30:28.166 ], 00:30:28.166 "product_name": "Logical Volume", 00:30:28.166 "block_size": 4096, 00:30:28.166 "num_blocks": 5242880, 00:30:28.166 "uuid": "8b273a66-7d4c-48ac-9e7b-c81744404a26", 00:30:28.166 "assigned_rate_limits": { 00:30:28.166 "rw_ios_per_sec": 0, 00:30:28.166 "rw_mbytes_per_sec": 0, 00:30:28.166 "r_mbytes_per_sec": 0, 00:30:28.167 "w_mbytes_per_sec": 0 00:30:28.167 }, 00:30:28.167 "claimed": false, 00:30:28.167 "zoned": false, 00:30:28.167 "supported_io_types": { 00:30:28.167 "read": true, 00:30:28.167 "write": true, 00:30:28.167 "unmap": true, 00:30:28.167 "flush": false, 00:30:28.167 "reset": true, 00:30:28.167 "nvme_admin": false, 00:30:28.167 "nvme_io": false, 00:30:28.167 "nvme_io_md": false, 00:30:28.167 "write_zeroes": 
true, 00:30:28.167 "zcopy": false, 00:30:28.167 "get_zone_info": false, 00:30:28.167 "zone_management": false, 00:30:28.167 "zone_append": false, 00:30:28.167 "compare": false, 00:30:28.167 "compare_and_write": false, 00:30:28.167 "abort": false, 00:30:28.167 "seek_hole": true, 00:30:28.167 "seek_data": true, 00:30:28.167 "copy": false, 00:30:28.167 "nvme_iov_md": false 00:30:28.167 }, 00:30:28.167 "driver_specific": { 00:30:28.167 "lvol": { 00:30:28.167 "lvol_store_uuid": "84d43102-e07e-48dd-b9dd-407f6769a600", 00:30:28.167 "base_bdev": "basen1", 00:30:28.167 "thin_provision": true, 00:30:28.167 "num_allocated_clusters": 0, 00:30:28.167 "snapshot": false, 00:30:28.167 "clone": false, 00:30:28.167 "esnap_clone": false 00:30:28.167 } 00:30:28.167 } 00:30:28.167 } 00:30:28.167 ]' 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:28.167 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:28.424 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:28.424 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:28.424 13:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:28.682 13:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:28.682 13:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:28.682 13:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8b273a66-7d4c-48ac-9e7b-c81744404a26 -c cachen1p0 --l2p_dram_limit 2 00:30:28.940 [2024-11-20 13:50:28.203643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.940 [2024-11-20 13:50:28.203697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:28.940 [2024-11-20 13:50:28.203713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:28.940 [2024-11-20 13:50:28.203722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.940 [2024-11-20 13:50:28.203778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.940 [2024-11-20 13:50:28.203787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:28.940 [2024-11-20 13:50:28.203797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:28.940 [2024-11-20 13:50:28.203805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.940 [2024-11-20 13:50:28.203825] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:28.940 [2024-11-20 
13:50:28.204688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:28.940 [2024-11-20 13:50:28.204709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.940 [2024-11-20 13:50:28.204717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:28.940 [2024-11-20 13:50:28.204727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.886 ms 00:30:28.940 [2024-11-20 13:50:28.204734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.940 [2024-11-20 13:50:28.204842] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 212776f4-5284-4563-ad62-8f7320aa9742 00:30:28.940 [2024-11-20 13:50:28.205920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.940 [2024-11-20 13:50:28.206095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:28.940 [2024-11-20 13:50:28.206112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:28.940 [2024-11-20 13:50:28.206122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.940 [2024-11-20 13:50:28.211432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.940 [2024-11-20 13:50:28.211469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:28.940 [2024-11-20 13:50:28.211479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.256 ms 00:30:28.940 [2024-11-20 13:50:28.211490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.940 [2024-11-20 13:50:28.211529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.940 [2024-11-20 13:50:28.211539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:28.941 [2024-11-20 13:50:28.211547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:28.941 [2024-11-20 13:50:28.211558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.941 [2024-11-20 13:50:28.211598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.941 [2024-11-20 13:50:28.211609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:28.941 [2024-11-20 13:50:28.211617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:28.941 [2024-11-20 13:50:28.211630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.941 [2024-11-20 13:50:28.211651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:28.941 [2024-11-20 13:50:28.215236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.941 [2024-11-20 13:50:28.215264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:28.941 [2024-11-20 13:50:28.215277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.588 ms 00:30:28.941 [2024-11-20 13:50:28.215284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.941 [2024-11-20 13:50:28.215310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.941 [2024-11-20 13:50:28.215318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:28.941 [2024-11-20 13:50:28.215327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:28.941 [2024-11-20 13:50:28.215334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:28.941 [2024-11-20 13:50:28.215358] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:28.941 [2024-11-20 13:50:28.215491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:28.941 [2024-11-20 13:50:28.215505] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:28.941 [2024-11-20 13:50:28.215516] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:28.941 [2024-11-20 13:50:28.215527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:28.941 [2024-11-20 13:50:28.215536] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:28.941 [2024-11-20 13:50:28.215546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:28.941 [2024-11-20 13:50:28.215553] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:28.941 [2024-11-20 13:50:28.215563] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:28.941 [2024-11-20 13:50:28.215570] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:28.941 [2024-11-20 13:50:28.215579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.941 [2024-11-20 13:50:28.215586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:28.941 [2024-11-20 13:50:28.215595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.222 ms 00:30:28.941 [2024-11-20 13:50:28.215602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.941 [2024-11-20 13:50:28.215685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.941 [2024-11-20 13:50:28.215693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:28.941 [2024-11-20 13:50:28.215703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:30:28.941 [2024-11-20 13:50:28.215716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.941 [2024-11-20 13:50:28.215830] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:28.941 [2024-11-20 13:50:28.215841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:28.941 [2024-11-20 13:50:28.215850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:28.941 [2024-11-20 13:50:28.215858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.215867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:28.941 [2024-11-20 13:50:28.215874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.215883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:28.941 [2024-11-20 13:50:28.215889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:28.941 [2024-11-20 13:50:28.215898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:28.941 [2024-11-20 13:50:28.215904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.215912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:28.941 [2024-11-20 13:50:28.215919] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:28.941 [2024-11-20 13:50:28.215927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.215934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:28.941 [2024-11-20 13:50:28.215942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:28.941 [2024-11-20 13:50:28.215948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.215958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:28.941 [2024-11-20 13:50:28.215966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:28.941 [2024-11-20 13:50:28.215992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.215999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:28.941 [2024-11-20 13:50:28.216008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:28.941 [2024-11-20 13:50:28.216014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.941 [2024-11-20 13:50:28.216022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:28.941 [2024-11-20 13:50:28.216029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:28.941 [2024-11-20 13:50:28.216037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.941 [2024-11-20 13:50:28.216043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:28.941 [2024-11-20 13:50:28.216051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:28.941 [2024-11-20 13:50:28.216058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.941 [2024-11-20 13:50:28.216074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:28.941 [2024-11-20 13:50:28.216080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:28.941 [2024-11-20 13:50:28.216089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.941 [2024-11-20 13:50:28.216095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:28.941 [2024-11-20 13:50:28.216104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:28.941 [2024-11-20 13:50:28.216111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.216118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:28.941 [2024-11-20 13:50:28.216125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:28.941 [2024-11-20 13:50:28.216133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.216139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:28.941 [2024-11-20 13:50:28.216147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:28.941 [2024-11-20 13:50:28.216153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.216161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:28.941 [2024-11-20 13:50:28.216168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:28.941 [2024-11-20 13:50:28.216175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.941 [2024-11-20 13:50:28.216181] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:28.941 [2024-11-20 13:50:28.216189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:28.941 [2024-11-20 13:50:28.216196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:28.941 [2024-11-20 13:50:28.216206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.942 [2024-11-20 13:50:28.216213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:28.942 [2024-11-20 13:50:28.216223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:28.942 [2024-11-20 13:50:28.216231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:28.942 [2024-11-20 13:50:28.216240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:28.942 [2024-11-20 13:50:28.216246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:28.942 [2024-11-20 13:50:28.216254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:28.942 [2024-11-20 13:50:28.216263] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:28.942 [2024-11-20 13:50:28.216274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:28.942 [2024-11-20 13:50:28.216292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:28.942 [2024-11-20 13:50:28.216314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:28.942 [2024-11-20 13:50:28.216322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:28.942 [2024-11-20 13:50:28.216329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:28.942 [2024-11-20 13:50:28.216338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:28.942 [2024-11-20 13:50:28.216394] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:28.942 [2024-11-20 13:50:28.216403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.942 [2024-11-20 13:50:28.216420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:28.942 [2024-11-20 13:50:28.216427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:28.942 [2024-11-20 13:50:28.216435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:28.942 [2024-11-20 13:50:28.216442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.942 [2024-11-20 13:50:28.216450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:28.942 [2024-11-20 13:50:28.216457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:30:28.942 [2024-11-20 13:50:28.216466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.942 [2024-11-20 13:50:28.216508] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
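This layout dump is printed while bdev_ftl_create builds bdev "ftl" on top of the thin-provisioned lvol (8b273a66-7d4c-48ac-9e7b-c81744404a26) with cachen1p0 as the write-buffer cache, and its numbers are internally consistent: 3,774,873 L2P entries at 4 bytes each need about 14.4 MiB, which the superblock rounds up to the 0xe80-block l2p region (14.50 MiB) listed in the nvc metadata layout. A quick shell check of that arithmetic:

    # Cross-check the l2p region size against the L2P geometry in the dump.
    entries=3774873   # "L2P entries"
    addr_sz=4         # "L2P address size" (bytes per entry)
    blk=4096          # FTL metadata block size
    blocks_needed=$(((entries * addr_sz + blk - 1) / blk))   # = 3687
    echo "need $blocks_needed blocks; region reserves 0xe80 = $((0xe80)) blocks = 14.50 MiB"

The small surplus (3712 blocks reserved vs 3687 strictly needed) is block alignment in the layout, and the FTL_L2P_DRAM_LIMIT=2 exported earlier is why only a 1-2 MiB slice of this map stays resident in DRAM at a time, as the l2p_cache notice further down confirms.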
00:30:28.942 [2024-11-20 13:50:28.216521] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:31.528 [2024-11-20 13:50:30.639332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.639393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:31.528 [2024-11-20 13:50:30.639409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2422.813 ms 00:30:31.528 [2024-11-20 13:50:30.639420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.664133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.664341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:31.528 [2024-11-20 13:50:30.664360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.525 ms 00:30:31.528 [2024-11-20 13:50:30.664370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.664444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.664456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:31.528 [2024-11-20 13:50:30.664465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:31.528 [2024-11-20 13:50:30.664478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.694738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.694779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:31.528 [2024-11-20 13:50:30.694790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.219 ms 00:30:31.528 [2024-11-20 13:50:30.694799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.694834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.694847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:31.528 [2024-11-20 13:50:30.694854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:31.528 [2024-11-20 13:50:30.694863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.695226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.695245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:31.528 [2024-11-20 13:50:30.695254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:30:31.528 [2024-11-20 13:50:30.695263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.695308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.695318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:31.528 [2024-11-20 13:50:30.695328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:31.528 [2024-11-20 13:50:30.695338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.708960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.709003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:31.528 [2024-11-20 13:50:30.709014] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.604 ms 00:30:31.528 [2024-11-20 13:50:30.709024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.737148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:31.528 [2024-11-20 13:50:30.737953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.738003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:31.528 [2024-11-20 13:50:30.738016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.855 ms 00:30:31.528 [2024-11-20 13:50:30.738025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.758617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.758654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:31.528 [2024-11-20 13:50:30.758668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.554 ms 00:30:31.528 [2024-11-20 13:50:30.758676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.758758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.758770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:31.528 [2024-11-20 13:50:30.758783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:31.528 [2024-11-20 13:50:30.758791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.780824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.780857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:31.528 [2024-11-20 13:50:30.780870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.988 ms 00:30:31.528 [2024-11-20 13:50:30.780879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.803313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.803347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:31.528 [2024-11-20 13:50:30.803359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.384 ms 00:30:31.528 [2024-11-20 13:50:30.803367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.803933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.803948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:31.528 [2024-11-20 13:50:30.803958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:30:31.528 [2024-11-20 13:50:30.803967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.869276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.869324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:31.528 [2024-11-20 13:50:30.869342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.262 ms 00:30:31.528 [2024-11-20 13:50:30.869350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.892988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:31.528 [2024-11-20 13:50:30.893037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:31.528 [2024-11-20 13:50:30.893058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.561 ms 00:30:31.528 [2024-11-20 13:50:30.893066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.915930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.915982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:31.528 [2024-11-20 13:50:30.915995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.821 ms 00:30:31.528 [2024-11-20 13:50:30.916003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.938168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.938320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:31.528 [2024-11-20 13:50:30.938342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.113 ms 00:30:31.528 [2024-11-20 13:50:30.938350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.938393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.938403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:31.528 [2024-11-20 13:50:30.938416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:31.528 [2024-11-20 13:50:30.938423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.938501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.528 [2024-11-20 13:50:30.938511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:31.528 [2024-11-20 13:50:30.938522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:31.528 [2024-11-20 13:50:30.938530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.528 [2024-11-20 13:50:30.939748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2735.693 ms, result 0 00:30:31.528 { 00:30:31.528 "name": "ftl", 00:30:31.528 "uuid": "212776f4-5284-4563-ad62-8f7320aa9742" 00:30:31.528 } 00:30:31.786 13:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:31.786 [2024-11-20 13:50:31.146808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.786 13:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:32.044 13:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:32.302 [2024-11-20 13:50:31.515220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:32.302 13:50:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:32.302 [2024-11-20 13:50:31.679542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:32.302 13:50:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:32.868 Fill FTL, iteration 1 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80326 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80326 /var/tmp/spdk.tgt.sock 00:30:32.868 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80326 ']' 00:30:32.869 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:32.869 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.869 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:32.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:32.869 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.869 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.869 [2024-11-20 13:50:32.092333] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
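The trace above shows tcp_initiator_setup bringing up a second spdk_tgt on core 1 to act as the NVMe/TCP initiator, with its own RPC socket so it cannot collide with the main target's default socket. A minimal sketch of that bring-up using only the paths and arguments visible in the trace; backgrounding with & and capturing $! are assumptions, since the trace only shows the resulting pid 80326 (waitforlisten is the autotest_common.sh helper traced above):

    # sketch of ftl/common.sh@162-165 as traced; '&' and '$!' are assumed
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!                 # trace shows spdk_ini_pid=80326
    export spdk_ini_pid
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock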
00:30:32.869 [2024-11-20 13:50:32.092568] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80326 ] 00:30:32.869 [2024-11-20 13:50:32.244524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.126 [2024-11-20 13:50:32.343755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.692 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.692 13:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:33.692 13:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:33.950 ftln1 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80326 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80326 ']' 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80326 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:33.950 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.951 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80326 00:30:34.209 killing process with pid 80326 00:30:34.209 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:34.209 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:34.209 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80326' 00:30:34.209 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80326 00:30:34.209 13:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80326 00:30:35.581 13:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:35.581 13:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:35.581 [2024-11-20 13:50:34.888897] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
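Once the initiator instance is listening, the trace attaches it over TCP to the subsystem exported earlier, which surfaces the namespace as bdev ftln1, and then wraps save_subsystem_config output in brackets to form a standalone JSON config. A sketch under the assumption that the wrapped output is redirected into the ini.json file whose existence is checked at ftl/common.sh@153:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0      # prints: ftln1
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json   # redirect target assumed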
00:30:35.581 [2024-11-20 13:50:34.889145] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80368 ] 00:30:35.839 [2024-11-20 13:50:35.043593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.839 [2024-11-20 13:50:35.141199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.256  [2024-11-20T13:50:37.617Z] Copying: 227/1024 [MB] (227 MBps) [2024-11-20T13:50:38.551Z] Copying: 452/1024 [MB] (225 MBps) [2024-11-20T13:50:39.924Z] Copying: 716/1024 [MB] (264 MBps) [2024-11-20T13:50:39.924Z] Copying: 978/1024 [MB] (262 MBps) [2024-11-20T13:50:40.490Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:30:41.063 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:41.063 Calculate MD5 checksum, iteration 1 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:41.063 13:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:41.063 [2024-11-20 13:50:40.350004] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
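With ini.json in place, tcp_initiator_setup short-circuits (the return 0 at ftl/common.sh@154 above) and tcp_dd reduces to running spdk_dd against the initiator socket with that config. A sketch of the helper as reconstructed from the two traced lines; the "$@" passthrough is an assumption:

    # reconstructed from ftl/common.sh@198-199; not the verbatim script
    tcp_dd() {
        tcp_initiator_setup
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            "$@"   # e.g. --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    }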
00:30:41.063 [2024-11-20 13:50:40.350400] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80426 ] 00:30:41.444 [2024-11-20 13:50:40.519693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.444 [2024-11-20 13:50:40.603925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.843  [2024-11-20T13:50:42.528Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-20T13:50:43.096Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:30:43.669 00:30:43.669 13:50:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:43.669 13:50:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:46.200 Fill FTL, iteration 2 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=99bc4d6074c49e23a51dda857386fcf3 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:46.200 13:50:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:46.200 [2024-11-20 13:50:45.194845] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
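The first iteration's read-back lands in test/ftl/file and its digest is recorded; the upgrade_shutdown.sh@47-48 trace above reduces to the line below, after which i is incremented and seek/skip advance by 1024 blocks (1 GiB) so iteration 2 covers the next stripe:

    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    # iteration 1 yields sums[0]=99bc4d6074c49e23a51dda857386fcf3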
00:30:46.200 [2024-11-20 13:50:45.194960] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80483 ] 00:30:46.200 [2024-11-20 13:50:45.352037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.200 [2024-11-20 13:50:45.450937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.573  [2024-11-20T13:50:47.933Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-20T13:50:48.864Z] Copying: 451/1024 [MB] (227 MBps) [2024-11-20T13:50:50.234Z] Copying: 699/1024 [MB] (248 MBps) [2024-11-20T13:50:50.234Z] Copying: 964/1024 [MB] (265 MBps) [2024-11-20T13:50:50.799Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:30:51.372 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:51.372 Calculate MD5 checksum, iteration 2 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:51.372 13:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.372 [2024-11-20 13:50:50.707878] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
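After both iterations, sums[] holds one MD5 per written gigabyte. Recording them here presumably exists so the same stripes can be re-read after the prep_upgrade_on_shutdown restart and compared; a hypothetical verification pass along those lines (not shown in this portion of the log) might look like:

    # hypothetical post-restart check, assuming the same tcp_dd read-back path
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        [[ $(md5sum "$file" | cut -f1 '-d ') == "${sums[i]}" ]]
    done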
00:30:51.372 [2024-11-20 13:50:50.708186] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80541 ] 00:30:51.630 [2024-11-20 13:50:50.863830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.630 [2024-11-20 13:50:50.946881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.076  [2024-11-20T13:50:53.068Z] Copying: 690/1024 [MB] (690 MBps) [2024-11-20T13:50:54.006Z] Copying: 1024/1024 [MB] (average 685 MBps) 00:30:54.579 00:30:54.579 13:50:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:54.579 13:50:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.476 13:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:56.476 13:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=157ab70be030383b007ddc5ce31f3165 00:30:56.476 13:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:56.476 13:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:56.476 13:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:56.734 [2024-11-20 13:50:56.076558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.734 [2024-11-20 13:50:56.076767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:56.734 [2024-11-20 13:50:56.076788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:56.734 [2024-11-20 13:50:56.076797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.734 [2024-11-20 13:50:56.076827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.734 [2024-11-20 13:50:56.076836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:56.734 [2024-11-20 13:50:56.076848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:56.734 [2024-11-20 13:50:56.076856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.734 [2024-11-20 13:50:56.076875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.734 [2024-11-20 13:50:56.076883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:56.734 [2024-11-20 13:50:56.076891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:56.734 [2024-11-20 13:50:56.076899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.734 [2024-11-20 13:50:56.076985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.389 ms, result 0 00:30:56.734 true 00:30:56.734 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:56.991 { 00:30:56.991 "name": "ftl", 00:30:56.991 "properties": [ 00:30:56.991 { 00:30:56.991 "name": "superblock_version", 00:30:56.991 "value": 5, 00:30:56.991 "read-only": true 00:30:56.991 }, 00:30:56.991 { 00:30:56.991 "name": "base_device", 00:30:56.991 "bands": [ 00:30:56.991 { 00:30:56.991 "id": 0, 00:30:56.991 "state": "FREE", 00:30:56.991 "validity": 0.0 
00:30:56.991 }, 00:30:56.991 { 00:30:56.991 "id": 1, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 2, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 3, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 4, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 5, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 6, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 7, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 8, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 9, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 10, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 11, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 12, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 13, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 14, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 15, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 16, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 17, 00:30:56.992 "state": "FREE", 00:30:56.992 "validity": 0.0 00:30:56.992 } 00:30:56.992 ], 00:30:56.992 "read-only": true 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "name": "cache_device", 00:30:56.992 "type": "bdev", 00:30:56.992 "chunks": [ 00:30:56.992 { 00:30:56.992 "id": 0, 00:30:56.992 "state": "INACTIVE", 00:30:56.992 "utilization": 0.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 1, 00:30:56.992 "state": "CLOSED", 00:30:56.992 "utilization": 1.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 2, 00:30:56.992 "state": "CLOSED", 00:30:56.992 "utilization": 1.0 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 3, 00:30:56.992 "state": "OPEN", 00:30:56.992 "utilization": 0.001953125 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "id": 4, 00:30:56.992 "state": "OPEN", 00:30:56.992 "utilization": 0.0 00:30:56.992 } 00:30:56.992 ], 00:30:56.992 "read-only": true 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "name": "verbose_mode", 00:30:56.992 "value": true, 00:30:56.992 "unit": "", 00:30:56.992 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:56.992 }, 00:30:56.992 { 00:30:56.992 "name": "prep_upgrade_on_shutdown", 00:30:56.992 "value": false, 00:30:56.992 "unit": "", 00:30:56.992 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:56.992 } 00:30:56.992 ] 00:30:56.992 } 00:30:56.992 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:57.250 [2024-11-20 13:50:56.480897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
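The FTL property interface drives the upgrade path: bdev_ftl_set_property flips prep_upgrade_on_shutdown, and bdev_ftl_get_properties returns the JSON dumped above, where two cache chunks sit CLOSED at utilization 1.0 and one is OPEN. The commands exactly as traced, plus the jq filter used just below to count non-empty cache chunks:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # the trace below evaluates used=3: the cache still holds data when shutdown begins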
00:30:57.250 [2024-11-20 13:50:56.481113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:57.250 [2024-11-20 13:50:56.481211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:57.250 [2024-11-20 13:50:56.481231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.250 [2024-11-20 13:50:56.481267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.250 [2024-11-20 13:50:56.481286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:57.250 [2024-11-20 13:50:56.481302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:57.250 [2024-11-20 13:50:56.481353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.250 [2024-11-20 13:50:56.481382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.250 [2024-11-20 13:50:56.481399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:57.250 [2024-11-20 13:50:56.481415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:57.250 [2024-11-20 13:50:56.481431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.250 [2024-11-20 13:50:56.481545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.632 ms, result 0 00:30:57.250 true 00:30:57.250 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:57.250 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:57.250 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:57.507 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:57.507 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:57.507 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:57.507 [2024-11-20 13:50:56.897261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.507 [2024-11-20 13:50:56.897301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:57.507 [2024-11-20 13:50:56.897311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:57.507 [2024-11-20 13:50:56.897317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.507 [2024-11-20 13:50:56.897336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.507 [2024-11-20 13:50:56.897342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:57.507 [2024-11-20 13:50:56.897348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:57.507 [2024-11-20 13:50:56.897354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.507 [2024-11-20 13:50:56.897369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.507 [2024-11-20 13:50:56.897375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:57.507 [2024-11-20 13:50:56.897381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:57.507 [2024-11-20 13:50:56.897387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:57.507 [2024-11-20 13:50:56.897432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.164 ms, result 0 00:30:57.507 true 00:30:57.507 13:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:57.778 { 00:30:57.778 "name": "ftl", 00:30:57.778 "properties": [ 00:30:57.778 { 00:30:57.778 "name": "superblock_version", 00:30:57.779 "value": 5, 00:30:57.779 "read-only": true 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "name": "base_device", 00:30:57.779 "bands": [ 00:30:57.779 { 00:30:57.779 "id": 0, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 1, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 2, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 3, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 4, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 5, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 6, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 7, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 8, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 9, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 10, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 11, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 12, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 13, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 14, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 15, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 16, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 17, 00:30:57.779 "state": "FREE", 00:30:57.779 "validity": 0.0 00:30:57.779 } 00:30:57.779 ], 00:30:57.779 "read-only": true 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "name": "cache_device", 00:30:57.779 "type": "bdev", 00:30:57.779 "chunks": [ 00:30:57.779 { 00:30:57.779 "id": 0, 00:30:57.779 "state": "INACTIVE", 00:30:57.779 "utilization": 0.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 1, 00:30:57.779 "state": "CLOSED", 00:30:57.779 "utilization": 1.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 2, 00:30:57.779 "state": "CLOSED", 00:30:57.779 "utilization": 1.0 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 3, 00:30:57.779 "state": "OPEN", 00:30:57.779 "utilization": 0.001953125 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "id": 4, 00:30:57.779 "state": "OPEN", 00:30:57.779 "utilization": 0.0 00:30:57.779 } 00:30:57.779 ], 00:30:57.779 "read-only": true 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "name": "verbose_mode", 
00:30:57.779 "value": true, 00:30:57.779 "unit": "", 00:30:57.779 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:57.779 }, 00:30:57.779 { 00:30:57.779 "name": "prep_upgrade_on_shutdown", 00:30:57.779 "value": true, 00:30:57.779 "unit": "", 00:30:57.779 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:57.779 } 00:30:57.779 ] 00:30:57.779 } 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80214 ]] 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80214 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80214 ']' 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80214 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80214 00:30:57.779 killing process with pid 80214 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80214' 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80214 00:30:57.779 13:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80214 00:30:58.347 [2024-11-20 13:50:57.653263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:58.347 [2024-11-20 13:50:57.664286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.347 [2024-11-20 13:50:57.664326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:58.347 [2024-11-20 13:50:57.664338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:58.347 [2024-11-20 13:50:57.664345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.347 [2024-11-20 13:50:57.664363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:58.347 [2024-11-20 13:50:57.666520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.347 [2024-11-20 13:50:57.666541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:58.347 [2024-11-20 13:50:57.666549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.146 ms 00:30:58.347 [2024-11-20 13:50:57.666556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.509274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.509332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:08.316 [2024-11-20 13:51:06.509346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8842.657 ms 00:31:08.316 [2024-11-20 13:51:06.509358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.510597] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.510614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:08.316 [2024-11-20 13:51:06.510623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.223 ms 00:31:08.316 [2024-11-20 13:51:06.510630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.511775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.511798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:08.316 [2024-11-20 13:51:06.511808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.118 ms 00:31:08.316 [2024-11-20 13:51:06.511819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.521206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.521239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:08.316 [2024-11-20 13:51:06.521249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.354 ms 00:31:08.316 [2024-11-20 13:51:06.521257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.527722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.527754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:08.316 [2024-11-20 13:51:06.527764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.430 ms 00:31:08.316 [2024-11-20 13:51:06.527773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.527856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.527865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:08.316 [2024-11-20 13:51:06.527878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:31:08.316 [2024-11-20 13:51:06.527885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.536755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.536784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:08.316 [2024-11-20 13:51:06.536794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.854 ms 00:31:08.316 [2024-11-20 13:51:06.536801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.545398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.316 [2024-11-20 13:51:06.545424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:08.316 [2024-11-20 13:51:06.545434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.565 ms 00:31:08.316 [2024-11-20 13:51:06.545441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.316 [2024-11-20 13:51:06.554257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.317 [2024-11-20 13:51:06.554285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:08.317 [2024-11-20 13:51:06.554294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.785 ms 00:31:08.317 [2024-11-20 13:51:06.554301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.317 [2024-11-20 13:51:06.562887] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.317 [2024-11-20 13:51:06.562914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:08.317 [2024-11-20 13:51:06.562923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.526 ms 00:31:08.317 [2024-11-20 13:51:06.562930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.317 [2024-11-20 13:51:06.562961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:08.317 [2024-11-20 13:51:06.562984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:08.317 [2024-11-20 13:51:06.562995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:08.317 [2024-11-20 13:51:06.563010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:08.317 [2024-11-20 13:51:06.563019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:08.317 [2024-11-20 13:51:06.563131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:08.317 [2024-11-20 13:51:06.563139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 212776f4-5284-4563-ad62-8f7320aa9742 00:31:08.317 [2024-11-20 13:51:06.563147] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:08.317 [2024-11-20 13:51:06.563154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:08.317 [2024-11-20 13:51:06.563161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:08.317 [2024-11-20 13:51:06.563168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:08.317 [2024-11-20 13:51:06.563175] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:08.317 [2024-11-20 13:51:06.563186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:08.317 [2024-11-20 13:51:06.563193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:08.317 [2024-11-20 13:51:06.563203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:08.317 [2024-11-20 13:51:06.563210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:08.317 [2024-11-20 13:51:06.563217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.317 [2024-11-20 13:51:06.563228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:08.317 [2024-11-20 13:51:06.563236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:31:08.317 [2024-11-20 13:51:06.563243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.317 [2024-11-20 13:51:06.575616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.317 [2024-11-20 13:51:06.575642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:08.317 [2024-11-20 13:51:06.575652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.357 ms 00:31:08.317 [2024-11-20 13:51:06.575664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.317 [2024-11-20 13:51:06.576006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.318 [2024-11-20 13:51:06.576015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:08.318 [2024-11-20 13:51:06.576024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:31:08.318 [2024-11-20 13:51:06.576031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.617645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.617689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:08.318 [2024-11-20 13:51:06.617704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.617712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.617751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.617759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:08.318 [2024-11-20 13:51:06.617767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.617774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.617851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.617861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:08.318 [2024-11-20 13:51:06.617869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.617880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.617896] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.617903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:08.318 [2024-11-20 13:51:06.617911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.617917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.695229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.695287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:08.318 [2024-11-20 13:51:06.695303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.695311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:08.318 [2024-11-20 13:51:06.759175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:08.318 [2024-11-20 13:51:06.759292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:08.318 [2024-11-20 13:51:06.759360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:08.318 [2024-11-20 13:51:06.759470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:08.318 [2024-11-20 13:51:06.759524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:08.318 [2024-11-20 13:51:06.759580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 
[2024-11-20 13:51:06.759629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.318 [2024-11-20 13:51:06.759642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:08.318 [2024-11-20 13:51:06.759654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.318 [2024-11-20 13:51:06.759666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.318 [2024-11-20 13:51:06.759784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9095.438 ms, result 0 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80741 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80741 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80741 ']' 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:14.924 13:51:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:14.924 [2024-11-20 13:51:13.217865] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
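The 'FTL shutdown' above took 9095.438 ms, dominated by the 8842.657 ms core-poller stop, presumably while the write buffer cache is drained under prep_upgrade_on_shutdown. tcp_target_setup then restarts spdk_tgt from the tgt.json written by the earlier save_config. A sketch of the restart as traced; backgrounding and pid capture are assumed, as before:

    # sketch of ftl/common.sh@85-91 as traced; '&' and '$!' are assumed
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                 # trace shows spdk_tgt_pid=80741
    export spdk_tgt_pid
    waitforlisten "$spdk_tgt_pid"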
00:31:14.924 [2024-11-20 13:51:13.218006] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80741 ] 00:31:14.924 [2024-11-20 13:51:13.379382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.924 [2024-11-20 13:51:13.478700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.924 [2024-11-20 13:51:14.168573] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:14.924 [2024-11-20 13:51:14.168635] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:14.924 [2024-11-20 13:51:14.312850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.924 [2024-11-20 13:51:14.312901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:14.924 [2024-11-20 13:51:14.312914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:14.924 [2024-11-20 13:51:14.312922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.924 [2024-11-20 13:51:14.312996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.924 [2024-11-20 13:51:14.313008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:14.924 [2024-11-20 13:51:14.313016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:31:14.924 [2024-11-20 13:51:14.313024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.924 [2024-11-20 13:51:14.313046] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:14.924 [2024-11-20 13:51:14.313697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:14.924 [2024-11-20 13:51:14.313719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.924 [2024-11-20 13:51:14.313727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:14.924 [2024-11-20 13:51:14.313736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.679 ms 00:31:14.924 [2024-11-20 13:51:14.313743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.924 [2024-11-20 13:51:14.314776] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:14.924 [2024-11-20 13:51:14.326869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.924 [2024-11-20 13:51:14.326902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:14.924 [2024-11-20 13:51:14.326917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.094 ms 00:31:14.924 [2024-11-20 13:51:14.326926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.924 [2024-11-20 13:51:14.326985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.924 [2024-11-20 13:51:14.326995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:14.924 [2024-11-20 13:51:14.327003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:14.924 [2024-11-20 13:51:14.327010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.924 [2024-11-20 13:51:14.331594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 
13:51:14.331624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:14.925 [2024-11-20 13:51:14.331633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.522 ms 00:31:14.925 [2024-11-20 13:51:14.331641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.331696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 13:51:14.331706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:14.925 [2024-11-20 13:51:14.331714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:14.925 [2024-11-20 13:51:14.331721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.331766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 13:51:14.331775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:14.925 [2024-11-20 13:51:14.331786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:14.925 [2024-11-20 13:51:14.331793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.331814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:14.925 [2024-11-20 13:51:14.335122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 13:51:14.335151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:14.925 [2024-11-20 13:51:14.335161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.314 ms 00:31:14.925 [2024-11-20 13:51:14.335172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.335198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 13:51:14.335206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:14.925 [2024-11-20 13:51:14.335214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:14.925 [2024-11-20 13:51:14.335221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.335242] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:14.925 [2024-11-20 13:51:14.335259] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:14.925 [2024-11-20 13:51:14.335294] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:14.925 [2024-11-20 13:51:14.335309] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:14.925 [2024-11-20 13:51:14.335410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:14.925 [2024-11-20 13:51:14.335429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:14.925 [2024-11-20 13:51:14.335440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:14.925 [2024-11-20 13:51:14.335449] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335458] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335468] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:14.925 [2024-11-20 13:51:14.335475] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:14.925 [2024-11-20 13:51:14.335483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:14.925 [2024-11-20 13:51:14.335490] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:14.925 [2024-11-20 13:51:14.335497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 13:51:14.335504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:14.925 [2024-11-20 13:51:14.335512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:31:14.925 [2024-11-20 13:51:14.335519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.335602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.925 [2024-11-20 13:51:14.335610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:14.925 [2024-11-20 13:51:14.335617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:14.925 [2024-11-20 13:51:14.335627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.925 [2024-11-20 13:51:14.335726] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:14.925 [2024-11-20 13:51:14.335742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:14.925 [2024-11-20 13:51:14.335750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:14.925 [2024-11-20 13:51:14.335772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:14.925 [2024-11-20 13:51:14.335786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:14.925 [2024-11-20 13:51:14.335793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:14.925 [2024-11-20 13:51:14.335799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:14.925 [2024-11-20 13:51:14.335812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:14.925 [2024-11-20 13:51:14.335819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:14.925 [2024-11-20 13:51:14.335832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:14.925 [2024-11-20 13:51:14.335838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:14.925 [2024-11-20 13:51:14.335850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:14.925 [2024-11-20 13:51:14.335857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335864] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:14.925 [2024-11-20 13:51:14.335874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:14.925 [2024-11-20 13:51:14.335880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:14.925 [2024-11-20 13:51:14.335893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:14.925 [2024-11-20 13:51:14.335899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:14.925 [2024-11-20 13:51:14.335918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:14.925 [2024-11-20 13:51:14.335924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:14.925 [2024-11-20 13:51:14.335936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:14.925 [2024-11-20 13:51:14.335943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:14.925 [2024-11-20 13:51:14.335955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:14.925 [2024-11-20 13:51:14.335961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.335980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:14.925 [2024-11-20 13:51:14.335988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:14.925 [2024-11-20 13:51:14.335994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.336000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:14.925 [2024-11-20 13:51:14.336007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:14.925 [2024-11-20 13:51:14.336014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.336020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:14.925 [2024-11-20 13:51:14.336026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:14.925 [2024-11-20 13:51:14.336033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.336039] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:14.925 [2024-11-20 13:51:14.336046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:14.925 [2024-11-20 13:51:14.336053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:14.925 [2024-11-20 13:51:14.336060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:14.925 [2024-11-20 13:51:14.336069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:14.925 [2024-11-20 13:51:14.336076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:14.925 [2024-11-20 13:51:14.336082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:14.925 [2024-11-20 13:51:14.336089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:14.925 [2024-11-20 13:51:14.336095] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:14.925 [2024-11-20 13:51:14.336104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:14.925 [2024-11-20 13:51:14.336112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:14.925 [2024-11-20 13:51:14.336120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:14.925 [2024-11-20 13:51:14.336128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:14.925 [2024-11-20 13:51:14.336136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:14.925 [2024-11-20 13:51:14.336142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:14.925 [2024-11-20 13:51:14.336149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:14.926 [2024-11-20 13:51:14.336156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:14.926 [2024-11-20 13:51:14.336163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:14.926 [2024-11-20 13:51:14.336170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:14.926 [2024-11-20 13:51:14.336177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:14.926 [2024-11-20 13:51:14.336225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:14.926 [2024-11-20 13:51:14.336233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:14.926 [2024-11-20 13:51:14.336248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:14.926 [2024-11-20 13:51:14.336255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:14.926 [2024-11-20 13:51:14.336261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:14.926 [2024-11-20 13:51:14.336269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.926 [2024-11-20 13:51:14.336276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:14.926 [2024-11-20 13:51:14.336283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.611 ms 00:31:14.926 [2024-11-20 13:51:14.336289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.926 [2024-11-20 13:51:14.336341] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:14.926 [2024-11-20 13:51:14.336354] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:17.451 [2024-11-20 13:51:16.689453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.451 [2024-11-20 13:51:16.689514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:17.451 [2024-11-20 13:51:16.689532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2353.100 ms 00:31:17.451 [2024-11-20 13:51:16.689541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.451 [2024-11-20 13:51:16.719181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.451 [2024-11-20 13:51:16.719235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:17.451 [2024-11-20 13:51:16.719248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.439 ms 00:31:17.451 [2024-11-20 13:51:16.719256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.451 [2024-11-20 13:51:16.719357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.451 [2024-11-20 13:51:16.719373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:17.451 [2024-11-20 13:51:16.719385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:17.451 [2024-11-20 13:51:16.719393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.451 [2024-11-20 13:51:16.755368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.451 [2024-11-20 13:51:16.755418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:17.451 [2024-11-20 13:51:16.755430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.916 ms 00:31:17.451 [2024-11-20 13:51:16.755443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.451 [2024-11-20 13:51:16.755494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.451 [2024-11-20 13:51:16.755503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:17.451 [2024-11-20 13:51:16.755512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:17.451 [2024-11-20 13:51:16.755519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.755881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.755910] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:17.452 [2024-11-20 13:51:16.755920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:31:17.452 [2024-11-20 13:51:16.755927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.755986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.755997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:17.452 [2024-11-20 13:51:16.756005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:17.452 [2024-11-20 13:51:16.756012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.771587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.771626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:17.452 [2024-11-20 13:51:16.771637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.552 ms 00:31:17.452 [2024-11-20 13:51:16.771649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.803612] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:17.452 [2024-11-20 13:51:16.803676] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:17.452 [2024-11-20 13:51:16.803696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.803710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:17.452 [2024-11-20 13:51:16.803726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.927 ms 00:31:17.452 [2024-11-20 13:51:16.803737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.820074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.820117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:17.452 [2024-11-20 13:51:16.820129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.291 ms 00:31:17.452 [2024-11-20 13:51:16.820141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.833276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.833313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:17.452 [2024-11-20 13:51:16.833323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.081 ms 00:31:17.452 [2024-11-20 13:51:16.833331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.846754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.846793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:17.452 [2024-11-20 13:51:16.846807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.397 ms 00:31:17.452 [2024-11-20 13:51:16.846815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.452 [2024-11-20 13:51:16.847502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.452 [2024-11-20 13:51:16.847532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:17.452 [2024-11-20 
13:51:16.847542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.604 ms 00:31:17.452 [2024-11-20 13:51:16.847549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.910124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.910198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:17.711 [2024-11-20 13:51:16.910216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.550 ms 00:31:17.711 [2024-11-20 13:51:16.910229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.922091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:17.711 [2024-11-20 13:51:16.922947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.923002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:17.711 [2024-11-20 13:51:16.923017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.639 ms 00:31:17.711 [2024-11-20 13:51:16.923028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.923142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.923191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:17.711 [2024-11-20 13:51:16.923205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:17.711 [2024-11-20 13:51:16.923217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.923310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.923329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:17.711 [2024-11-20 13:51:16.923341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:17.711 [2024-11-20 13:51:16.923351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.923387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.923398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:17.711 [2024-11-20 13:51:16.923413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:17.711 [2024-11-20 13:51:16.923424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.923463] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:17.711 [2024-11-20 13:51:16.923476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.923487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:17.711 [2024-11-20 13:51:16.923499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:17.711 [2024-11-20 13:51:16.923510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.950201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.950257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:17.711 [2024-11-20 13:51:16.950274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.658 ms 00:31:17.711 [2024-11-20 13:51:16.950286] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.950403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.711 [2024-11-20 13:51:16.950416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:17.711 [2024-11-20 13:51:16.950429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:31:17.711 [2024-11-20 13:51:16.950441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.711 [2024-11-20 13:51:16.951696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2638.245 ms, result 0 00:31:17.711 [2024-11-20 13:51:16.966599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.711 [2024-11-20 13:51:16.982620] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:17.711 [2024-11-20 13:51:16.990839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:18.276 13:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.276 13:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:18.276 13:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:18.276 13:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:18.276 13:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:18.276 [2024-11-20 13:51:17.663548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.276 [2024-11-20 13:51:17.663602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:18.276 [2024-11-20 13:51:17.663616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:18.276 [2024-11-20 13:51:17.663627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.276 [2024-11-20 13:51:17.663650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.276 [2024-11-20 13:51:17.663659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:18.276 [2024-11-20 13:51:17.663669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:18.276 [2024-11-20 13:51:17.663678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.276 [2024-11-20 13:51:17.663697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.276 [2024-11-20 13:51:17.663705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:18.276 [2024-11-20 13:51:17.663714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:18.276 [2024-11-20 13:51:17.663721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.276 [2024-11-20 13:51:17.663780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.222 ms, result 0 00:31:18.276 true 00:31:18.276 13:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:18.534 { 00:31:18.534 "name": "ftl", 00:31:18.534 "properties": [ 00:31:18.534 { 00:31:18.534 "name": "superblock_version", 00:31:18.534 "value": 5, 00:31:18.534 "read-only": true 00:31:18.534 }, 
00:31:18.534 { 00:31:18.534 "name": "base_device", 00:31:18.534 "bands": [ 00:31:18.534 { 00:31:18.534 "id": 0, 00:31:18.534 "state": "CLOSED", 00:31:18.534 "validity": 1.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 1, 00:31:18.534 "state": "CLOSED", 00:31:18.534 "validity": 1.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 2, 00:31:18.534 "state": "CLOSED", 00:31:18.534 "validity": 0.007843137254901933 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 3, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 4, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 5, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 6, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 7, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 8, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 9, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 10, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 11, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 12, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 13, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 14, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 15, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 16, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 17, 00:31:18.534 "state": "FREE", 00:31:18.534 "validity": 0.0 00:31:18.534 } 00:31:18.534 ], 00:31:18.534 "read-only": true 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "name": "cache_device", 00:31:18.534 "type": "bdev", 00:31:18.534 "chunks": [ 00:31:18.534 { 00:31:18.534 "id": 0, 00:31:18.534 "state": "INACTIVE", 00:31:18.534 "utilization": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 1, 00:31:18.534 "state": "OPEN", 00:31:18.534 "utilization": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 2, 00:31:18.534 "state": "OPEN", 00:31:18.534 "utilization": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 3, 00:31:18.534 "state": "FREE", 00:31:18.534 "utilization": 0.0 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "id": 4, 00:31:18.534 "state": "FREE", 00:31:18.534 "utilization": 0.0 00:31:18.534 } 00:31:18.534 ], 00:31:18.534 "read-only": true 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "name": "verbose_mode", 00:31:18.534 "value": true, 00:31:18.534 "unit": "", 00:31:18.534 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:18.534 }, 00:31:18.534 { 00:31:18.534 "name": "prep_upgrade_on_shutdown", 00:31:18.534 "value": false, 00:31:18.534 "unit": "", 00:31:18.534 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:18.534 } 00:31:18.534 ] 00:31:18.534 } 00:31:18.534 13:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:18.534 13:51:17 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:18.534 13:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:18.793 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:18.793 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:18.793 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:18.793 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:18.793 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.050 Validate MD5 checksum, iteration 1 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:19.050 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:19.051 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:19.051 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:19.051 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:19.051 13:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:19.051 [2024-11-20 13:51:18.364257] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
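The xtrace above is upgrade_shutdown.sh entering its checksum pass: the two jq filters against bdev_ftl_get_properties confirm that no cache chunk holds data (used=0) and no band is left OPENED (opened=0), then test_validate_checksum starts reading the bdev back in 1 GiB windows. Pieced together from the fragments printed in this log, the loop is roughly the sketch below; iterations, testfile, and the checksums array are assumed stand-ins for the script's real bookkeeping, and tcp_dd is the common.sh wrapper whose spdk_dd expansion is shown above.

  skip=0
  for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read the next 1 GiB window from ftln1 over NVMe/TCP (1 MiB blocks, qd 2).
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    ((skip += 1024))
    sum=$(md5sum "$testfile" | cut -f1 -d ' ')
    # Compare against the checksum recorded when this window was written;
    # any mismatch fails the test.
    [[ $sum == "${checksums[i]}" ]] || return 1
  done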
00:31:19.051 [2024-11-20 13:51:18.364516] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80814 ] 00:31:19.308 [2024-11-20 13:51:18.523863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.308 [2024-11-20 13:51:18.624671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.211  [2024-11-20T13:51:20.900Z] Copying: 641/1024 [MB] (641 MBps) [2024-11-20T13:51:21.833Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:31:22.406 00:31:22.406 13:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:22.406 13:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=99bc4d6074c49e23a51dda857386fcf3 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 99bc4d6074c49e23a51dda857386fcf3 != \9\9\b\c\4\d\6\0\7\4\c\4\9\e\2\3\a\5\1\d\d\a\8\5\7\3\8\6\f\c\f\3 ]] 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:24.933 Validate MD5 checksum, iteration 2 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:24.933 13:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:24.933 [2024-11-20 13:51:23.988774] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
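tcp_dd itself is thin: per the xtrace, tcp_initiator_setup only verifies that ini.json exists, and the helper then runs the spdk_dd command printed in full above, so spdk_dd acts as a short-lived NVMe/TCP initiator that attaches ftln1 through the JSON config and reads it like a file. A standalone equivalent of the second window read, assuming the target from this log is still listening on 127.0.0.1 port 4420 (the output path here is illustrative, not from the log):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/tmp/ftl_window2.bin \
    --bs=1048576 --count=1024 --qd=2 --skip=1024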
00:31:24.933 [2024-11-20 13:51:23.989066] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80877 ] 00:31:24.933 [2024-11-20 13:51:24.149128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.933 [2024-11-20 13:51:24.246849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.830  [2024-11-20T13:51:26.257Z] Copying: 712/1024 [MB] (712 MBps) [2024-11-20T13:51:27.188Z] Copying: 1024/1024 [MB] (average 698 MBps) 00:31:27.761 00:31:27.761 13:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:27.761 13:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=157ab70be030383b007ddc5ce31f3165 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 157ab70be030383b007ddc5ce31f3165 != \1\5\7\a\b\7\0\b\e\0\3\0\3\8\3\b\0\0\7\d\d\c\5\c\e\3\1\f\3\1\6\5 ]] 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80741 ]] 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80741 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:30.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80940 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80940 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80940 ']' 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
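This is the point of the test: tcp_target_shutdown_dirty sends SIGKILL to pid 80741, so FTL never executes its shutdown path and no clean-shutdown marker is written, and tcp_target_setup immediately brings up a new target (pid 80940) on the same tgt.json. Condensed from the common.sh xtrace above; waitforlisten is the autotest_common.sh helper visible in the log, while the backgrounding shown here is an assumed simplification of the script's actual job control:

  # tcp_target_shutdown_dirty: no RPC teardown, just SIGKILL.
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
  # tcp_target_setup: restart from the saved target config. FTL must now
  # recover from superblock + NV cache state instead of a clean shutdown.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"

The startup sequence that follows in the log, in particular the 'Recover band state' and 'Restore P2L checkpoints' steps and the per-checkpoint 'P2L ckpt_id=N found seq_id=M' lines, is this dirty recovery running on the new pid.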
00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.287 13:51:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:30.287 [2024-11-20 13:51:29.243233] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:31:30.287 [2024-11-20 13:51:29.243356] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80940 ] 00:31:30.287 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80741 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:30.287 [2024-11-20 13:51:29.399390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.287 [2024-11-20 13:51:29.482417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.855 [2024-11-20 13:51:30.073049] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:30.855 [2024-11-20 13:51:30.073103] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:30.855 [2024-11-20 13:51:30.216666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.216726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:30.855 [2024-11-20 13:51:30.216739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:30.855 [2024-11-20 13:51:30.216748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.216813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.216824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:30.855 [2024-11-20 13:51:30.216832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:30.855 [2024-11-20 13:51:30.216840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.216862] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:30.855 [2024-11-20 13:51:30.217624] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:30.855 [2024-11-20 13:51:30.217647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.217655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:30.855 [2024-11-20 13:51:30.217663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.790 ms 00:31:30.855 [2024-11-20 13:51:30.217671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.218042] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:30.855 [2024-11-20 13:51:30.234044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.234084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:30.855 [2024-11-20 13:51:30.234098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.001 ms 00:31:30.855 [2024-11-20 13:51:30.234106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.242883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:30.855 [2024-11-20 13:51:30.242915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:30.855 [2024-11-20 13:51:30.242928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:30.855 [2024-11-20 13:51:30.242936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.243297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.243316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:30.855 [2024-11-20 13:51:30.243325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:31:30.855 [2024-11-20 13:51:30.243333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.243383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.243398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:30.855 [2024-11-20 13:51:30.243406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:30.855 [2024-11-20 13:51:30.243413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.243439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.243447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:30.855 [2024-11-20 13:51:30.243455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:30.855 [2024-11-20 13:51:30.243462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.243484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:30.855 [2024-11-20 13:51:30.246612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.246640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:30.855 [2024-11-20 13:51:30.246650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.135 ms 00:31:30.855 [2024-11-20 13:51:30.246657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.246687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.246696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:30.855 [2024-11-20 13:51:30.246704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:30.855 [2024-11-20 13:51:30.246711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.246731] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:30.855 [2024-11-20 13:51:30.246749] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:30.855 [2024-11-20 13:51:30.246783] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:30.855 [2024-11-20 13:51:30.246800] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:30.855 [2024-11-20 13:51:30.246901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:30.855 [2024-11-20 13:51:30.246912] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:30.855 [2024-11-20 13:51:30.246922] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:30.855 [2024-11-20 13:51:30.246932] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:30.855 [2024-11-20 13:51:30.246940] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:30.855 [2024-11-20 13:51:30.246949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:30.855 [2024-11-20 13:51:30.246956] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:30.855 [2024-11-20 13:51:30.246963] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:30.855 [2024-11-20 13:51:30.246980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:30.855 [2024-11-20 13:51:30.246988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.246997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:30.855 [2024-11-20 13:51:30.247005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:31:30.855 [2024-11-20 13:51:30.247012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.247098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.855 [2024-11-20 13:51:30.247107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:30.855 [2024-11-20 13:51:30.247114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:31:30.855 [2024-11-20 13:51:30.247121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.855 [2024-11-20 13:51:30.247237] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:30.855 [2024-11-20 13:51:30.247248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:30.855 [2024-11-20 13:51:30.247258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:30.855 [2024-11-20 13:51:30.247266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.855 [2024-11-20 13:51:30.247274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:30.855 [2024-11-20 13:51:30.247281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:30.855 [2024-11-20 13:51:30.247287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:30.855 [2024-11-20 13:51:30.247294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:30.855 [2024-11-20 13:51:30.247301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:30.855 [2024-11-20 13:51:30.247307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.855 [2024-11-20 13:51:30.247313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:30.855 [2024-11-20 13:51:30.247320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:30.855 [2024-11-20 13:51:30.247327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.855 [2024-11-20 13:51:30.247334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:30.856 [2024-11-20 13:51:30.247340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:30.856 [2024-11-20 13:51:30.247347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:30.856 [2024-11-20 13:51:30.247362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:30.856 [2024-11-20 13:51:30.247368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:30.856 [2024-11-20 13:51:30.247382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:30.856 [2024-11-20 13:51:30.247389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:30.856 [2024-11-20 13:51:30.247407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:30.856 [2024-11-20 13:51:30.247413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:30.856 [2024-11-20 13:51:30.247426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:30.856 [2024-11-20 13:51:30.247433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:30.856 [2024-11-20 13:51:30.247445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:30.856 [2024-11-20 13:51:30.247451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:30.856 [2024-11-20 13:51:30.247463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:30.856 [2024-11-20 13:51:30.247470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:30.856 [2024-11-20 13:51:30.247482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:30.856 [2024-11-20 13:51:30.247501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:30.856 [2024-11-20 13:51:30.247519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:30.856 [2024-11-20 13:51:30.247525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247532] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:30.856 [2024-11-20 13:51:30.247539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:30.856 [2024-11-20 13:51:30.247546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:30.856 [2024-11-20 13:51:30.247560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:30.856 [2024-11-20 13:51:30.247566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:30.856 [2024-11-20 13:51:30.247574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:30.856 [2024-11-20 13:51:30.247581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:30.856 [2024-11-20 13:51:30.247587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:30.856 [2024-11-20 13:51:30.247594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:30.856 [2024-11-20 13:51:30.247601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:30.856 [2024-11-20 13:51:30.247611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:30.856 [2024-11-20 13:51:30.247626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:30.856 [2024-11-20 13:51:30.247646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:30.856 [2024-11-20 13:51:30.247653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:30.856 [2024-11-20 13:51:30.247660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:30.856 [2024-11-20 13:51:30.247667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:30.856 [2024-11-20 13:51:30.247715] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:30.856 [2024-11-20 13:51:30.247722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:30.856 [2024-11-20 13:51:30.247739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:30.856 [2024-11-20 13:51:30.247746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:30.856 [2024-11-20 13:51:30.247753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:30.856 [2024-11-20 13:51:30.247761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.856 [2024-11-20 13:51:30.247769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:30.856 [2024-11-20 13:51:30.247775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:31:30.856 [2024-11-20 13:51:30.247782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.856 [2024-11-20 13:51:30.271751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.856 [2024-11-20 13:51:30.271793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:30.856 [2024-11-20 13:51:30.271805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.920 ms 00:31:30.856 [2024-11-20 13:51:30.271813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.856 [2024-11-20 13:51:30.271859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.856 [2024-11-20 13:51:30.271867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:30.856 [2024-11-20 13:51:30.271875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:30.856 [2024-11-20 13:51:30.271882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.114 [2024-11-20 13:51:30.302460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.114 [2024-11-20 13:51:30.302506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:31.114 [2024-11-20 13:51:30.302518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.516 ms 00:31:31.114 [2024-11-20 13:51:30.302532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.114 [2024-11-20 13:51:30.302570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.114 [2024-11-20 13:51:30.302577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:31.114 [2024-11-20 13:51:30.302586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:31.114 [2024-11-20 13:51:30.302594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.114 [2024-11-20 13:51:30.302697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.114 [2024-11-20 13:51:30.302708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:31.114 [2024-11-20 13:51:30.302717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:31:31.114 [2024-11-20 13:51:30.302724] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.302762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.302769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:31.115 [2024-11-20 13:51:30.302777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:31.115 [2024-11-20 13:51:30.302784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.316677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.316711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:31.115 [2024-11-20 13:51:30.316721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.873 ms 00:31:31.115 [2024-11-20 13:51:30.316728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.316845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.316856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:31.115 [2024-11-20 13:51:30.316865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:31.115 [2024-11-20 13:51:30.316872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.346753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.346796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:31.115 [2024-11-20 13:51:30.346810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.862 ms 00:31:31.115 [2024-11-20 13:51:30.346818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.356805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.356843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:31.115 [2024-11-20 13:51:30.356861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:31:31.115 [2024-11-20 13:51:30.356869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.411369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.411561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:31.115 [2024-11-20 13:51:30.411585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.430 ms 00:31:31.115 [2024-11-20 13:51:30.411594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.411728] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:31.115 [2024-11-20 13:51:30.411822] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:31.115 [2024-11-20 13:51:30.411915] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:31.115 [2024-11-20 13:51:30.412022] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:31.115 [2024-11-20 13:51:30.412032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.412040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:31.115 [2024-11-20 
13:51:30.412049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:31:31.115 [2024-11-20 13:51:30.412057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.412127] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:31.115 [2024-11-20 13:51:30.412139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.412150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:31.115 [2024-11-20 13:51:30.412158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:31.115 [2024-11-20 13:51:30.412165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.427053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.427095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:31.115 [2024-11-20 13:51:30.427106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.865 ms 00:31:31.115 [2024-11-20 13:51:30.427114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.435998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.436028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:31.115 [2024-11-20 13:51:30.436038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:31.115 [2024-11-20 13:51:30.436045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.115 [2024-11-20 13:51:30.436128] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:31.115 [2024-11-20 13:51:30.436254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.115 [2024-11-20 13:51:30.436265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:31.115 [2024-11-20 13:51:30.436273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.127 ms 00:31:31.115 [2024-11-20 13:51:30.436280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.721 [2024-11-20 13:51:30.879024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.721 [2024-11-20 13:51:30.879084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:31.721 [2024-11-20 13:51:30.879097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 441.857 ms 00:31:31.721 [2024-11-20 13:51:30.879104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.721 [2024-11-20 13:51:30.882350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.721 [2024-11-20 13:51:30.882499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:31.721 [2024-11-20 13:51:30.882516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.686 ms 00:31:31.721 [2024-11-20 13:51:30.882523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.721 [2024-11-20 13:51:30.882796] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:31.721 [2024-11-20 13:51:30.882815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.721 [2024-11-20 13:51:30.882822] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:31.721 [2024-11-20 13:51:30.882830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:31:31.721 [2024-11-20 13:51:30.882837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.721 [2024-11-20 13:51:30.882862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.721 [2024-11-20 13:51:30.882870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:31.721 [2024-11-20 13:51:30.882877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:31.721 [2024-11-20 13:51:30.882883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.721 [2024-11-20 13:51:30.882915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 446.787 ms, result 0 00:31:31.721 [2024-11-20 13:51:30.882945] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:31.721 [2024-11-20 13:51:30.883043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.721 [2024-11-20 13:51:30.883053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:31.721 [2024-11-20 13:51:30.883060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:31:31.721 [2024-11-20 13:51:30.883066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.304281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.304348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:31.980 [2024-11-20 13:51:31.304362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 420.394 ms 00:31:31.980 [2024-11-20 13:51:31.304371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.308269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.308418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:31.980 [2024-11-20 13:51:31.308435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:31:31.980 [2024-11-20 13:51:31.308443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.308726] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:31.980 [2024-11-20 13:51:31.308752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.308761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:31.980 [2024-11-20 13:51:31.308770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:31:31.980 [2024-11-20 13:51:31.308777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.308795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.308803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:31.980 [2024-11-20 13:51:31.308810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:31.980 [2024-11-20 13:51:31.308817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 
13:51:31.308851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 425.898 ms, result 0 00:31:31.980 [2024-11-20 13:51:31.308890] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:31.980 [2024-11-20 13:51:31.308900] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:31.980 [2024-11-20 13:51:31.308909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.308918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:31.980 [2024-11-20 13:51:31.308925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 872.796 ms 00:31:31.980 [2024-11-20 13:51:31.308932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.308980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.308990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:31.980 [2024-11-20 13:51:31.309001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:31.980 [2024-11-20 13:51:31.309009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.319603] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:31.980 [2024-11-20 13:51:31.319812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.319827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:31.980 [2024-11-20 13:51:31.319836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.788 ms 00:31:31.980 [2024-11-20 13:51:31.319843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.320560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.320578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:31.980 [2024-11-20 13:51:31.320591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.634 ms 00:31:31.980 [2024-11-20 13:51:31.320598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.322833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.322941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:31.980 [2024-11-20 13:51:31.322954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms 00:31:31.980 [2024-11-20 13:51:31.322961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.323014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.323024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:31.980 [2024-11-20 13:51:31.323032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:31.980 [2024-11-20 13:51:31.323043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.323145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.323154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:31.980 
[2024-11-20 13:51:31.323162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:31.980 [2024-11-20 13:51:31.323169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.323188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.323196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:31.980 [2024-11-20 13:51:31.323203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:31.980 [2024-11-20 13:51:31.323211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.323238] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:31.980 [2024-11-20 13:51:31.323246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.323254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:31.980 [2024-11-20 13:51:31.323261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:31.980 [2024-11-20 13:51:31.323269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.323319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.980 [2024-11-20 13:51:31.323328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:31.980 [2024-11-20 13:51:31.323335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:31:31.980 [2024-11-20 13:51:31.323342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.980 [2024-11-20 13:51:31.324200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1107.127 ms, result 0 00:31:31.980 [2024-11-20 13:51:31.336590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.980 [2024-11-20 13:51:31.352592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:31.980 [2024-11-20 13:51:31.360710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:32.548 Validate MD5 checksum, iteration 1 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:32.548 13:51:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:32.548 13:51:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:32.548 [2024-11-20 13:51:31.863620] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:31:32.548 [2024-11-20 13:51:31.864297] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80975 ] 00:31:32.807 [2024-11-20 13:51:32.023710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.807 [2024-11-20 13:51:32.122835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.705  [2024-11-20T13:51:34.132Z] Copying: 738/1024 [MB] (738 MBps) [2024-11-20T13:51:38.311Z] Copying: 1024/1024 [MB] (average 769 MBps) 00:31:38.884 00:31:38.884 13:51:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:38.884 13:51:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=99bc4d6074c49e23a51dda857386fcf3 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 99bc4d6074c49e23a51dda857386fcf3 != \9\9\b\c\4\d\6\0\7\4\c\4\9\e\2\3\a\5\1\d\d\a\8\5\7\3\8\6\f\c\f\3 ]] 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:40.782 Validate MD5 checksum, iteration 2 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:40.782 13:51:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:41.040 [2024-11-20 13:51:40.209111] Starting SPDK v25.01-pre git sha1 
82b85d9ca / DPDK 24.03.0 initialization... 00:31:41.040 [2024-11-20 13:51:40.209231] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81065 ] 00:31:41.040 [2024-11-20 13:51:40.368439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.297 [2024-11-20 13:51:40.469403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.669  [2024-11-20T13:51:42.661Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-20T13:51:43.595Z] Copying: 1024/1024 [MB] (average 727 MBps) 00:31:44.168 00:31:44.168 13:51:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:44.168 13:51:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.067 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=157ab70be030383b007ddc5ce31f3165 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 157ab70be030383b007ddc5ce31f3165 != \1\5\7\a\b\7\0\b\e\0\3\0\3\8\3\b\0\0\7\d\d\c\5\c\e\3\1\f\3\1\6\5 ]] 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:46.068 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80940 ]] 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80940 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80940 ']' 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80940 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80940 00:31:46.326 killing process with pid 80940 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80940' 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 80940 00:31:46.326 13:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80940 00:31:46.893 [2024-11-20 13:51:46.187634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:46.893 [2024-11-20 13:51:46.200282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.893 [2024-11-20 13:51:46.200323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:46.893 [2024-11-20 13:51:46.200335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:46.894 [2024-11-20 13:51:46.200341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.200359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:46.894 [2024-11-20 13:51:46.202466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.202492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:46.894 [2024-11-20 13:51:46.202504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.096 ms 00:31:46.894 [2024-11-20 13:51:46.202511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.202676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.202684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:46.894 [2024-11-20 13:51:46.202690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:31:46.894 [2024-11-20 13:51:46.202696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.203723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.203746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:46.894 [2024-11-20 13:51:46.203754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.017 ms 00:31:46.894 [2024-11-20 13:51:46.203761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.204662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.204681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:46.894 [2024-11-20 13:51:46.204690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.877 ms 00:31:46.894 [2024-11-20 13:51:46.204697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.211912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.211943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:46.894 [2024-11-20 13:51:46.211951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.179 ms 00:31:46.894 [2024-11-20 13:51:46.211962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.216236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.216267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:46.894 [2024-11-20 13:51:46.216277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.238 ms 00:31:46.894 [2024-11-20 13:51:46.216283] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.216344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.216352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:46.894 [2024-11-20 13:51:46.216359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:46.894 [2024-11-20 13:51:46.216365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.223507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.223537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:46.894 [2024-11-20 13:51:46.223545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.125 ms 00:31:46.894 [2024-11-20 13:51:46.223551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.230802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.230828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:46.894 [2024-11-20 13:51:46.230835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.225 ms 00:31:46.894 [2024-11-20 13:51:46.230841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.237982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.238008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:46.894 [2024-11-20 13:51:46.238015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.096 ms 00:31:46.894 [2024-11-20 13:51:46.238022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.244890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.244916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:46.894 [2024-11-20 13:51:46.244923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.821 ms 00:31:46.894 [2024-11-20 13:51:46.244929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.244957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:46.894 [2024-11-20 13:51:46.244978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:46.894 [2024-11-20 13:51:46.244986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:46.894 [2024-11-20 13:51:46.244993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:46.894 [2024-11-20 13:51:46.244999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 
[2024-11-20 13:51:46.245030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:46.894 [2024-11-20 13:51:46.245089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:46.894 [2024-11-20 13:51:46.245095] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 212776f4-5284-4563-ad62-8f7320aa9742 00:31:46.894 [2024-11-20 13:51:46.245101] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:46.894 [2024-11-20 13:51:46.245107] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:46.894 [2024-11-20 13:51:46.245112] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:46.894 [2024-11-20 13:51:46.245118] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:46.894 [2024-11-20 13:51:46.245124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:46.894 [2024-11-20 13:51:46.245129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:46.894 [2024-11-20 13:51:46.245134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:46.894 [2024-11-20 13:51:46.245139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:46.894 [2024-11-20 13:51:46.245144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:46.894 [2024-11-20 13:51:46.245149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.245159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:46.894 [2024-11-20 13:51:46.245166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:31:46.894 [2024-11-20 13:51:46.245171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.894 [2024-11-20 13:51:46.254838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.254863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:46.894 [2024-11-20 13:51:46.254872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.654 ms 00:31:46.894 [2024-11-20 13:51:46.254879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
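The shutdown sequence above is reported by trace_step() in mngt/ftl_mngt.c as fixed quadruplets: an Action record (Rollback for the teardown steps further below), the step name, its duration in milliseconds, and a status code, with finish_msg() then printing the total for the whole management process. Because the format is stable, slow steps can be ranked straight from a captured console log. The helper below is a hypothetical post-processing sketch, not part of the SPDK tree; it assumes one record per line, as the console originally emits them.

    #!/usr/bin/env bash
    # ftl_step_times.sh -- rank FTL management steps by duration.
    # Hypothetical helper; parses the trace_step records shown above.
    log="${1:?usage: $0 <captured console log>}"
    awk '
        # 428:trace_step carries "name: <step>"; remember it.
        /428:trace_step:.*name: /     { sub(/.*name: /, ""); step = $0 }
        # 430:trace_step carries "duration: <n> ms"; pair it with the name.
        /430:trace_step:.*duration: / {
            sub(/.*duration: /, ""); sub(/ ms.*/, "")
            printf "%12.3f ms  %s\n", $0, step
        }
    ' "$log" | sort -rn | head -n 10

On the records above, this would surface 'Deinitialize L2P' (9.654 ms) and the persist steps (roughly 7 ms each) as the dominant costs of the clean-shutdown path.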
00:31:46.894 [2024-11-20 13:51:46.255163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.894 [2024-11-20 13:51:46.255170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:46.894 [2024-11-20 13:51:46.255177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:31:46.894 [2024-11-20 13:51:46.255182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.895 [2024-11-20 13:51:46.288575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.895 [2024-11-20 13:51:46.288612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:46.895 [2024-11-20 13:51:46.288622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.895 [2024-11-20 13:51:46.288628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.895 [2024-11-20 13:51:46.288664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.895 [2024-11-20 13:51:46.288671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:46.895 [2024-11-20 13:51:46.288678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.895 [2024-11-20 13:51:46.288683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.895 [2024-11-20 13:51:46.288761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.895 [2024-11-20 13:51:46.288769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:46.895 [2024-11-20 13:51:46.288775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.895 [2024-11-20 13:51:46.288781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.895 [2024-11-20 13:51:46.288794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.895 [2024-11-20 13:51:46.288803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:46.895 [2024-11-20 13:51:46.288808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.895 [2024-11-20 13:51:46.288814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.348782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.348820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:47.153 [2024-11-20 13:51:46.348830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.348836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.397625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.397665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:47.153 [2024-11-20 13:51:46.397675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.397681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.397742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.397750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:47.153 [2024-11-20 13:51:46.397756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.397762] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.397805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.397812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:47.153 [2024-11-20 13:51:46.397823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.397834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.397905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.397913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:47.153 [2024-11-20 13:51:46.397919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.397925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.397948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.397955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:47.153 [2024-11-20 13:51:46.397961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.397978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.398006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.398013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:47.153 [2024-11-20 13:51:46.398019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.398025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.398058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:47.153 [2024-11-20 13:51:46.398065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:47.153 [2024-11-20 13:51:46.398073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:47.153 [2024-11-20 13:51:46.398079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.153 [2024-11-20 13:51:46.398170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.867 ms, result 0 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:47.773 Remove shared memory files 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:47.773 13:51:47 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80741 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:47.773 00:31:47.773 real 1m22.240s 00:31:47.773 user 1m53.463s 00:31:47.773 sys 0m18.134s 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.773 13:51:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:47.773 ************************************ 00:31:47.773 END TEST ftl_upgrade_shutdown 00:31:47.773 ************************************ 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@14 -- # killprocess 75262 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@954 -- # '[' -z 75262 ']' 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@958 -- # kill -0 75262 00:31:47.773 Process with pid 75262 is not found 00:31:47.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75262) - No such process 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75262 is not found' 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81176 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81176 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@835 -- # '[' -z 81176 ']' 00:31:47.773 13:51:47 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.773 13:51:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:48.063 [2024-11-20 13:51:47.185144] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
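With the old target gone and a fresh spdk_tgt coming up under pid 81176, the exit handler next clears any logical volume stores the interrupted FTL tests left behind (clear_lvols, in the records that follow): it lists lvstores over JSON-RPC, extracts each UUID with jq, and deletes them one by one. Note also the killprocess guard above: kill -0 probes whether pid 75262 still exists before any signal is sent, so a target that already exited is reported as not found instead of failing the run. A minimal sketch of the lvstore sweep, using the rpc.py path seen in this log (the loop is an illustrative paraphrase of ftl/common.sh, not the verbatim code):

    # Sweep leftover lvstores on the freshly started target.
    # rpc.py path taken from this log; talks to the default /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for uuid in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$rpc" bdev_lvol_delete_lvstore -u "$uuid"
    done

In the records that follow, this sweep finds a single leftover store (84d43102-e07e-48dd-b9dd-407f6769a600) on nvme0n1 and removes it before the target is torn down again.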
00:31:48.063 [2024-11-20 13:51:47.185270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81176 ] 00:31:48.063 [2024-11-20 13:51:47.346809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.063 [2024-11-20 13:51:47.446681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.629 13:51:48 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.629 13:51:48 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:48.629 13:51:48 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:48.886 nvme0n1 00:31:48.886 13:51:48 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:48.886 13:51:48 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:48.886 13:51:48 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.145 13:51:48 ftl -- ftl/common.sh@28 -- # stores=84d43102-e07e-48dd-b9dd-407f6769a600 00:31:49.145 13:51:48 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:49.145 13:51:48 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84d43102-e07e-48dd-b9dd-407f6769a600 00:31:49.403 13:51:48 ftl -- ftl/ftl.sh@23 -- # killprocess 81176 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 81176 ']' 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@958 -- # kill -0 81176 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@959 -- # uname 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81176 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:49.403 killing process with pid 81176 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81176' 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@973 -- # kill 81176 00:31:49.403 13:51:48 ftl -- common/autotest_common.sh@978 -- # wait 81176 00:31:51.300 13:51:50 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:51.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:51.300 Waiting for block devices as requested 00:31:51.300 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:51.300 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:51.300 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:51.300 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:56.567 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:56.567 Remove shared memory files 00:31:56.567 13:51:55 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:56.567 13:51:55 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:56.567 13:51:55 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:56.567 13:51:55 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:56.567 13:51:55 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:56.567 13:51:55 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:56.567 13:51:55 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:56.567 00:31:56.567 real 
8m33.452s 00:31:56.567 user 10m40.676s 00:31:56.567 sys 1m19.240s 00:31:56.567 13:51:55 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.567 ************************************ 00:31:56.567 13:51:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:56.567 END TEST ftl 00:31:56.567 ************************************ 00:31:56.567 13:51:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:56.567 13:51:55 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:56.567 13:51:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:56.567 13:51:55 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:56.567 13:51:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:56.567 13:51:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:56.567 13:51:55 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:56.567 13:51:55 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:56.567 13:51:55 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:56.567 13:51:55 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:56.567 13:51:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.567 13:51:55 -- common/autotest_common.sh@10 -- # set +x 00:31:56.567 13:51:55 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:56.567 13:51:55 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:56.567 13:51:55 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:56.567 13:51:55 -- common/autotest_common.sh@10 -- # set +x 00:31:57.500 INFO: APP EXITING 00:31:57.500 INFO: killing all VMs 00:31:57.500 INFO: killing vhost app 00:31:57.500 INFO: EXIT DONE 00:31:57.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:58.323 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:58.323 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:58.323 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:58.323 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:58.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:58.838 Cleaning 00:31:58.838 Removing: /var/run/dpdk/spdk0/config 00:31:58.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:58.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:58.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:58.839 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:58.839 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:58.839 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:58.839 Removing: /var/run/dpdk/spdk0 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57035 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57253 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57471 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57570 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57615 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57743 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57761 00:31:58.839 Removing: /var/run/dpdk/spdk_pid57960 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58064 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58160 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58271 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58368 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58413 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58450 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58520 00:31:58.839 Removing: /var/run/dpdk/spdk_pid58637 00:31:58.839 Removing: /var/run/dpdk/spdk_pid59084 00:31:58.839 Removing: /var/run/dpdk/spdk_pid59143 00:31:58.839 
Removing: /var/run/dpdk/spdk_pid59206
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59222
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59324
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59340
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59442
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59458
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59511
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59529
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59582
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59600
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59749
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59791
00:31:58.839 Removing: /var/run/dpdk/spdk_pid59875
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60052
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60131
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60167
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60604
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60704
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60813
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60866
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60893
00:31:58.839 Removing: /var/run/dpdk/spdk_pid60971
00:31:58.839 Removing: /var/run/dpdk/spdk_pid61590
00:31:58.839 Removing: /var/run/dpdk/spdk_pid61626
00:31:58.839 Removing: /var/run/dpdk/spdk_pid62113
00:31:58.839 Removing: /var/run/dpdk/spdk_pid62211
00:31:58.839 Removing: /var/run/dpdk/spdk_pid62320
00:31:58.839 Removing: /var/run/dpdk/spdk_pid62373
00:31:58.839 Removing: /var/run/dpdk/spdk_pid62399
00:31:58.839 Removing: /var/run/dpdk/spdk_pid62424
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64273
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64410
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64420
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64432
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64472
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64476
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64488
00:31:58.839 Removing: /var/run/dpdk/spdk_pid64535
00:31:59.097 Removing: /var/run/dpdk/spdk_pid64539
00:31:59.097 Removing: /var/run/dpdk/spdk_pid64551
00:31:59.097 Removing: /var/run/dpdk/spdk_pid64596
00:31:59.097 Removing: /var/run/dpdk/spdk_pid64600
00:31:59.097 Removing: /var/run/dpdk/spdk_pid64612
00:31:59.097 Removing: /var/run/dpdk/spdk_pid66002
00:31:59.097 Removing: /var/run/dpdk/spdk_pid66106
00:31:59.097 Removing: /var/run/dpdk/spdk_pid67512
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69252
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69321
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69396
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69502
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69594
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69694
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69764
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69839
00:31:59.097 Removing: /var/run/dpdk/spdk_pid69949
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70046
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70137
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70206
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70281
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70391
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70487
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70582
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70656
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70736
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70840
00:31:59.097 Removing: /var/run/dpdk/spdk_pid70933
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71029
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71092
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71167
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71247
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71320
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71420
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71517
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71607
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71676
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71750
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71830
00:31:59.097 Removing: /var/run/dpdk/spdk_pid71904
00:31:59.097 Removing: /var/run/dpdk/spdk_pid72007
00:31:59.097 Removing: /var/run/dpdk/spdk_pid72098
00:31:59.097 Removing: /var/run/dpdk/spdk_pid72242
00:31:59.097 Removing: /var/run/dpdk/spdk_pid72525
00:31:59.097 Removing: /var/run/dpdk/spdk_pid72557
00:31:59.097 Removing: /var/run/dpdk/spdk_pid72997
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73189
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73293
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73404
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73451
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73471
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73792
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73848
00:31:59.097 Removing: /var/run/dpdk/spdk_pid73926
00:31:59.097 Removing: /var/run/dpdk/spdk_pid74310
00:31:59.097 Removing: /var/run/dpdk/spdk_pid74456
00:31:59.097 Removing: /var/run/dpdk/spdk_pid75262
00:31:59.097 Removing: /var/run/dpdk/spdk_pid75394
00:31:59.097 Removing: /var/run/dpdk/spdk_pid75585
00:31:59.097 Removing: /var/run/dpdk/spdk_pid75677
00:31:59.097 Removing: /var/run/dpdk/spdk_pid75974
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76211
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76547
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76729
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76816
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76870
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76963
00:31:59.097 Removing: /var/run/dpdk/spdk_pid76988
00:31:59.097 Removing: /var/run/dpdk/spdk_pid77041
00:31:59.097 Removing: /var/run/dpdk/spdk_pid77198
00:31:59.098 Removing: /var/run/dpdk/spdk_pid77423
00:31:59.098 Removing: /var/run/dpdk/spdk_pid77698
00:31:59.098 Removing: /var/run/dpdk/spdk_pid78012
00:31:59.098 Removing: /var/run/dpdk/spdk_pid78280
00:31:59.098 Removing: /var/run/dpdk/spdk_pid78620
00:31:59.098 Removing: /var/run/dpdk/spdk_pid78740
00:31:59.098 Removing: /var/run/dpdk/spdk_pid78827
00:31:59.098 Removing: /var/run/dpdk/spdk_pid79234
00:31:59.098 Removing: /var/run/dpdk/spdk_pid79298
00:31:59.098 Removing: /var/run/dpdk/spdk_pid79588
00:31:59.098 Removing: /var/run/dpdk/spdk_pid79859
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80214
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80326
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80368
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80426
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80483
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80541
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80741
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80814
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80877
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80940
00:31:59.098 Removing: /var/run/dpdk/spdk_pid80975
00:31:59.098 Removing: /var/run/dpdk/spdk_pid81065
00:31:59.098 Removing: /var/run/dpdk/spdk_pid81176
00:31:59.098 Clean
00:31:59.098 13:51:58 -- common/autotest_common.sh@1453 -- # return 0
00:31:59.098 13:51:58 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:31:59.098 13:51:58 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:59.098 13:51:58 -- common/autotest_common.sh@10 -- # set +x
00:31:59.355 13:51:58 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:31:59.355 13:51:58 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:59.355 13:51:58 -- common/autotest_common.sh@10 -- # set +x
00:31:59.355 13:51:58 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:59.355 13:51:58 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:59.355 13:51:58 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:59.355 13:51:58 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:31:59.355 13:51:58 -- spdk/autotest.sh@398 -- # hostname
00:31:59.355 13:51:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:25.891 geninfo: WARNING: invalid characters removed from testname!
00:32:25.891 13:52:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:25.891 13:52:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:27.792 13:52:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:29.202 13:52:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:31.831 13:52:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:33.733 13:52:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:35.635 13:52:35 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:35.636 13:52:35 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:35.636 13:52:35 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:35.636 13:52:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:35.636 13:52:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:35.636 13:52:35 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:35.894 + [[ -n 5032 ]]
00:32:35.894 + sudo kill 5032
00:32:35.902 [Pipeline] }
00:32:35.920 [Pipeline] // timeout
00:32:35.925 [Pipeline] }
00:32:35.941 [Pipeline] // stage
00:32:35.947 [Pipeline] }
00:32:35.962 [Pipeline] // catchError
00:32:35.973 [Pipeline] stage
00:32:35.976 [Pipeline] { (Stop VM)
00:32:35.989 [Pipeline] sh
00:32:36.267 + vagrant halt
00:32:38.798 ==> default: Halting domain...
00:32:42.117 [Pipeline] sh
00:32:42.398 + vagrant destroy -f
00:32:44.935 ==> default: Removing domain...
00:32:45.523 [Pipeline] sh
00:32:45.808 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:32:45.819 [Pipeline] }
00:32:45.834 [Pipeline] // stage
00:32:45.840 [Pipeline] }
00:32:45.855 [Pipeline] // dir
00:32:45.861 [Pipeline] }
00:32:45.875 [Pipeline] // wrap
00:32:45.882 [Pipeline] }
00:32:45.896 [Pipeline] // catchError
00:32:45.905 [Pipeline] stage
00:32:45.908 [Pipeline] { (Epilogue)
00:32:45.921 [Pipeline] sh
00:32:46.204 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:52.776 [Pipeline] catchError
00:32:52.778 [Pipeline] {
00:32:52.791 [Pipeline] sh
00:32:53.072 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:53.072 Artifacts sizes are good
00:32:53.080 [Pipeline] }
00:32:53.094 [Pipeline] // catchError
00:32:53.105 [Pipeline] archiveArtifacts
00:32:53.113 Archiving artifacts
00:32:53.290 [Pipeline] cleanWs
00:32:53.301 [WS-CLEANUP] Deleting project workspace...
00:32:53.301 [WS-CLEANUP] Deferred wipeout is used...
00:32:53.306 [WS-CLEANUP] done
00:32:53.308 [Pipeline] }
00:32:53.324 [Pipeline] // stage
00:32:53.330 [Pipeline] }
00:32:53.344 [Pipeline] // node
00:32:53.350 [Pipeline] End of Pipeline
00:32:53.389 Finished: SUCCESS