00:00:00.001 Started by upstream project "autotest-per-patch" build number 132416
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.050 The recommended git tool is: git
00:00:00.051 using credential 00000000-0000-0000-0000-000000000002
00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.068 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.103 Using shallow fetch with depth 1
00:00:00.103 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.103 > git --version # timeout=10
00:00:00.144 > git --version # 'git version 2.39.2'
00:00:00.144 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.196 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.196 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.791 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.806 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.819 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.819 > git config core.sparsecheckout # timeout=10
00:00:03.832 > git read-tree -mu HEAD # timeout=10
00:00:03.854 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.880 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.880 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.986 [Pipeline] Start of Pipeline
00:00:04.004 [Pipeline] library
00:00:04.006 Loading library shm_lib@master
00:00:07.672 Library shm_lib@master is cached. Copying from home.
00:00:07.759 [Pipeline] node
00:00:22.809 Still waiting to schedule task
00:00:22.810 Waiting for next available executor on ‘vagrant-vm-host’
00:02:02.179 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:02:02.181 [Pipeline] {
00:02:02.193 [Pipeline] catchError
00:02:02.194 [Pipeline] {
00:02:02.208 [Pipeline] wrap
00:02:02.217 [Pipeline] {
00:02:02.225 [Pipeline] stage
00:02:02.227 [Pipeline] { (Prologue)
00:02:02.241 [Pipeline] echo
00:02:02.244 Node: VM-host-SM38
00:02:02.250 [Pipeline] cleanWs
00:02:02.276 [WS-CLEANUP] Deleting project workspace...
00:02:02.276 [WS-CLEANUP] Deferred wipeout is used...
00:02:02.284 [WS-CLEANUP] done
00:02:02.473 [Pipeline] setCustomBuildProperty
00:02:02.567 [Pipeline] httpRequest
00:02:02.886 [Pipeline] echo
00:02:02.887 Sorcerer 10.211.164.20 is alive
00:02:02.897 [Pipeline] retry
00:02:02.898 [Pipeline] {
00:02:02.908 [Pipeline] httpRequest
00:02:02.912 HttpMethod: GET
00:02:02.913 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:02.913 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:02.914 Response Code: HTTP/1.1 200 OK
00:02:02.914 Success: Status code 200 is in the accepted range: 200,404
00:02:02.914 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:03.060 [Pipeline] }
00:02:03.076 [Pipeline] // retry
00:02:03.083 [Pipeline] sh
00:02:03.368 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:03.383 [Pipeline] httpRequest
00:02:03.686 [Pipeline] echo
00:02:03.688 Sorcerer 10.211.164.20 is alive
00:02:03.698 [Pipeline] retry
00:02:03.700 [Pipeline] {
00:02:03.715 [Pipeline] httpRequest
00:02:03.720 HttpMethod: GET
00:02:03.721 URL: http://10.211.164.20/packages/spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz
00:02:03.722 Sending request to url: http://10.211.164.20/packages/spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz
00:02:03.723 Response Code: HTTP/1.1 200 OK
00:02:03.723 Success: Status code 200 is in the accepted range: 200,404
00:02:03.724 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz
00:02:05.996 [Pipeline] }
00:02:06.017 [Pipeline] // retry
00:02:06.026 [Pipeline] sh
00:02:06.311 + tar --no-same-owner -xf spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz
00:02:09.648 [Pipeline] sh
00:02:09.930 + git -C spdk log --oneline -n5
00:02:09.930 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns
00:02:09.930 349af566b nvmf: Get metadata config by not bdev but bdev_desc
00:02:09.930 1981e6eec bdevperf: Add hide_metadata option
00:02:09.930 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc
00:02:09.930 25916e30c bdevperf: Store the result of DIF type check into job structure
00:02:09.949 [Pipeline] writeFile
00:02:09.965 [Pipeline] sh
00:02:10.297 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:10.309 [Pipeline] sh
00:02:10.591 + cat autorun-spdk.conf
00:02:10.591 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.591 SPDK_TEST_NVME=1
00:02:10.591 SPDK_TEST_FTL=1
00:02:10.591 SPDK_TEST_ISAL=1
00:02:10.591 SPDK_RUN_ASAN=1
00:02:10.591 SPDK_RUN_UBSAN=1
00:02:10.591 SPDK_TEST_XNVME=1
00:02:10.591 SPDK_TEST_NVME_FDP=1
00:02:10.591 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.598 RUN_NIGHTLY=0
00:02:10.600 [Pipeline] }
00:02:10.615 [Pipeline] // stage
00:02:10.630 [Pipeline] stage
00:02:10.632 [Pipeline] { (Run VM)
00:02:10.645 [Pipeline] sh
00:02:10.931 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:10.931 + echo 'Start stage prepare_nvme.sh'
00:02:10.931 Start stage prepare_nvme.sh
00:02:10.931 + [[ -n 10 ]]
00:02:10.931 + disk_prefix=ex10
00:02:10.931 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:02:10.931 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:02:10.931 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:02:10.931 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.931 ++ SPDK_TEST_NVME=1
00:02:10.931 ++ SPDK_TEST_FTL=1
00:02:10.931 ++ SPDK_TEST_ISAL=1
00:02:10.931 ++ SPDK_RUN_ASAN=1
00:02:10.931 ++ SPDK_RUN_UBSAN=1
00:02:10.931 ++ SPDK_TEST_XNVME=1
00:02:10.931 ++ SPDK_TEST_NVME_FDP=1
00:02:10.931 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.931 ++ RUN_NIGHTLY=0
00:02:10.931 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:02:10.931 + nvme_files=()
00:02:10.931 + declare -A nvme_files
00:02:10.931 + backend_dir=/var/lib/libvirt/images/backends
00:02:10.931 + nvme_files['nvme.img']=5G
00:02:10.931 + nvme_files['nvme-cmb.img']=5G
00:02:10.931 + nvme_files['nvme-multi0.img']=4G
00:02:10.931 + nvme_files['nvme-multi1.img']=4G
00:02:10.931 + nvme_files['nvme-multi2.img']=4G
00:02:10.931 + nvme_files['nvme-openstack.img']=8G
00:02:10.931 + nvme_files['nvme-zns.img']=5G
00:02:10.931 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:10.931 + (( SPDK_TEST_FTL == 1 ))
00:02:10.931 + nvme_files["nvme-ftl.img"]=6G
00:02:10.931 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:10.931 + nvme_files["nvme-fdp.img"]=1G
00:02:10.931 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:02:10.931 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:02:10.931 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:02:10.931 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:02:10.931 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:02:10.931 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:02:10.931 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:10.931 + for nvme in "${!nvme_files[@]}"
00:02:10.931 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:02:11.193 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:11.193 + for nvme in "${!nvme_files[@]}"
00:02:11.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:02:11.193 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:11.193 + for nvme in "${!nvme_files[@]}"
00:02:11.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:02:11.193 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:11.194 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:02:11.194 + echo 'End stage prepare_nvme.sh'
00:02:11.194 End stage prepare_nvme.sh
00:02:11.206 [Pipeline] sh
00:02:11.489 + DISTRO=fedora39
00:02:11.489 + CPUS=10
00:02:11.489 + RAM=12288
00:02:11.489 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:11.489 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:11.489
00:02:11.489 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:02:11.489 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:02:11.489 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:02:11.489 HELP=0
00:02:11.489 DRY_RUN=0
00:02:11.489 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:02:11.489 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:11.489 NVME_AUTO_CREATE=0
00:02:11.489 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:02:11.489 NVME_CMB=,,,,
00:02:11.489 NVME_PMR=,,,,
00:02:11.489 NVME_ZNS=,,,,
00:02:11.489 NVME_MS=true,,,,
00:02:11.489 NVME_FDP=,,,on,
00:02:11.489 SPDK_VAGRANT_DISTRO=fedora39
00:02:11.489 SPDK_VAGRANT_VMCPU=10
00:02:11.489 SPDK_VAGRANT_VMRAM=12288
00:02:11.489 SPDK_VAGRANT_PROVIDER=libvirt
00:02:11.489 SPDK_VAGRANT_HTTP_PROXY=
00:02:11.489 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:11.489 SPDK_OPENSTACK_NETWORK=0
00:02:11.489 VAGRANT_PACKAGE_BOX=0
00:02:11.489 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:02:11.489 FORCE_DISTRO=true
00:02:11.489 VAGRANT_BOX_VERSION=
00:02:11.489 EXTRA_VAGRANTFILES=
00:02:11.489 NIC_MODEL=e1000
00:02:11.489
00:02:11.489 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:02:11.489 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:02:14.016 Bringing machine 'default' up with 'libvirt' provider...
00:02:14.278 ==> default: Creating image (snapshot of base box volume).
00:02:14.278 ==> default: Creating domain with the following settings...
00:02:14.278 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732117812_77f08ff15913c50a4c2a
00:02:14.278 ==> default: -- Domain type: kvm
00:02:14.278 ==> default: -- Cpus: 10
00:02:14.278 ==> default: -- Feature: acpi
00:02:14.278 ==> default: -- Feature: apic
00:02:14.278 ==> default: -- Feature: pae
00:02:14.278 ==> default: -- Memory: 12288M
00:02:14.278 ==> default: -- Memory Backing: hugepages:
00:02:14.278 ==> default: -- Management MAC:
00:02:14.278 ==> default: -- Loader:
00:02:14.278 ==> default: -- Nvram:
00:02:14.278 ==> default: -- Base box: spdk/fedora39
00:02:14.278 ==> default: -- Storage pool: default
00:02:14.278 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732117812_77f08ff15913c50a4c2a.img (20G)
00:02:14.278 ==> default: -- Volume Cache: default
00:02:14.278 ==> default: -- Kernel:
00:02:14.278 ==> default: -- Initrd:
00:02:14.278 ==> default: -- Graphics Type: vnc
00:02:14.278 ==> default: -- Graphics Port: -1
00:02:14.278 ==> default: -- Graphics IP: 127.0.0.1
00:02:14.278 ==> default: -- Graphics Password: Not defined
00:02:14.278 ==> default: -- Video Type: cirrus
00:02:14.278 ==> default: -- Video VRAM: 9216
00:02:14.278 ==> default: -- Sound Type:
00:02:14.278 ==> default: -- Keymap: en-us
00:02:14.279 ==> default: -- TPM Path:
00:02:14.279 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:14.279 ==> default: -- Command line args:
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:14.279 ==> default: -> value=-drive,
00:02:14.279 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:14.279 ==> default: -> value=-drive,
00:02:14.279 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:14.279 ==> default: -> value=-drive,
00:02:14.279 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.279 ==> default: -> value=-drive,
00:02:14.279 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.279 ==> default: -> value=-drive,
00:02:14.279 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:14.279 ==> default: -> value=-drive,
00:02:14.279 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:14.279 ==> default: -> value=-device,
00:02:14.279 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:14.548 ==> default: Creating shared folders metadata...
00:02:14.548 ==> default: Starting domain.
00:02:15.481 ==> default: Waiting for domain to get an IP address...
00:02:30.350 ==> default: Waiting for SSH to become available...
00:02:30.350 ==> default: Configuring and enabling network interfaces...
00:02:33.628 default: SSH address: 192.168.121.6:22
00:02:33.628 default: SSH username: vagrant
00:02:33.628 default: SSH auth method: private key
00:02:35.520 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:42.071 ==> default: Mounting SSHFS shared folder...
00:02:43.034 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:43.034 ==> default: Checking Mount..
00:02:43.966 ==> default: Folder Successfully Mounted!
00:02:43.966
00:02:43.966 SUCCESS!
00:02:43.966
00:02:43.966 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:02:43.966 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:43.966 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:02:43.966
00:02:43.974 [Pipeline] }
00:02:43.990 [Pipeline] // stage
00:02:43.999 [Pipeline] dir
00:02:44.000 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:02:44.001 [Pipeline] {
00:02:44.015 [Pipeline] catchError
00:02:44.017 [Pipeline] {
00:02:44.031 [Pipeline] sh
00:02:44.310 + vagrant ssh-config --host vagrant
00:02:44.310 + sed -ne '/^Host/,$p'
00:02:44.310 + tee ssh_conf
00:02:46.860 Host vagrant
00:02:46.860 HostName 192.168.121.6
00:02:46.860 User vagrant
00:02:46.860 Port 22
00:02:46.860 UserKnownHostsFile /dev/null
00:02:46.860 StrictHostKeyChecking no
00:02:46.860 PasswordAuthentication no
00:02:46.860 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:46.860 IdentitiesOnly yes
00:02:46.860 LogLevel FATAL
00:02:46.860 ForwardAgent yes
00:02:46.860 ForwardX11 yes
00:02:46.860
00:02:46.870 [Pipeline] withEnv
00:02:46.871 [Pipeline] {
00:02:46.879 [Pipeline] sh
00:02:47.152 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:47.152 source /etc/os-release
00:02:47.152 [[ -e /image.version ]] && img=$(< /image.version)
00:02:47.152 # Minimal, systemd-like check.
00:02:47.152 if [[ -e /.dockerenv ]]; then
00:02:47.152 # Clear garbage from the node'\''s name:
00:02:47.152 # agt-er_autotest_547-896 -> autotest_547-896
00:02:47.152 # $HOSTNAME is the actual container id
00:02:47.152 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:47.152 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:47.152 # We can assume this is a mount from a host where container is running,
00:02:47.152 # so fetch its hostname to easily identify the target swarm worker.
00:02:47.152 container="$(< /etc/hostname) ($agent)"
00:02:47.152 else
00:02:47.152 # Fallback
00:02:47.152 container=$agent
00:02:47.152 fi
00:02:47.152 fi
00:02:47.152 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:47.152 '
00:02:47.161 [Pipeline] }
00:02:47.175 [Pipeline] // withEnv
00:02:47.181 [Pipeline] setCustomBuildProperty
00:02:47.193 [Pipeline] stage
00:02:47.195 [Pipeline] { (Tests)
00:02:47.210 [Pipeline] sh
00:02:47.484 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:47.495 [Pipeline] sh
00:02:47.772 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:47.785 [Pipeline] timeout
00:02:47.785 Timeout set to expire in 50 min
00:02:47.787 [Pipeline] {
00:02:47.802 [Pipeline] sh
00:02:48.080 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:48.337 HEAD is now at 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns
00:02:48.348 [Pipeline] sh
00:02:48.624 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:48.636 [Pipeline] sh
00:02:48.925 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:48.940 [Pipeline] sh
00:02:49.221 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:49.221 ++ readlink -f spdk_repo
00:02:49.221 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:49.221 + [[ -n /home/vagrant/spdk_repo ]]
00:02:49.221 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:49.221 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:49.221 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:49.221 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:49.221 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:49.221 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:49.221 + cd /home/vagrant/spdk_repo
00:02:49.221 + source /etc/os-release
00:02:49.221 ++ NAME='Fedora Linux'
00:02:49.221 ++ VERSION='39 (Cloud Edition)'
00:02:49.221 ++ ID=fedora
00:02:49.221 ++ VERSION_ID=39
00:02:49.221 ++ VERSION_CODENAME=
00:02:49.221 ++ PLATFORM_ID=platform:f39
00:02:49.221 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:49.221 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:49.221 ++ LOGO=fedora-logo-icon
00:02:49.221 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:49.221 ++ HOME_URL=https://fedoraproject.org/
00:02:49.221 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:49.221 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:49.221 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:49.221 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:49.221 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:49.221 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:49.221 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:49.221 ++ SUPPORT_END=2024-11-12
00:02:49.221 ++ VARIANT='Cloud Edition'
00:02:49.221 ++ VARIANT_ID=cloud
00:02:49.221 + uname -a
00:02:49.221 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:49.221 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:49.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:49.786 Hugepages
00:02:49.786 node hugesize free / total
00:02:49.786 node0 1048576kB 0 / 0
00:02:49.786 node0 2048kB 0 / 0
00:02:49.786
00:02:49.786 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:49.786 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:49.786 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:50.044 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:50.044 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:50.044 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:50.044 + rm -f /tmp/spdk-ld-path
00:02:50.044 + source autorun-spdk.conf
00:02:50.044 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:50.044 ++ SPDK_TEST_NVME=1
00:02:50.044 ++ SPDK_TEST_FTL=1
00:02:50.044 ++ SPDK_TEST_ISAL=1
00:02:50.044 ++ SPDK_RUN_ASAN=1
00:02:50.044 ++ SPDK_RUN_UBSAN=1
00:02:50.044 ++ SPDK_TEST_XNVME=1
00:02:50.044 ++ SPDK_TEST_NVME_FDP=1
00:02:50.044 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:50.044 ++ RUN_NIGHTLY=0
00:02:50.044 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:50.044 + [[ -n '' ]]
00:02:50.044 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:50.044 + for M in /var/spdk/build-*-manifest.txt
00:02:50.044 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:50.044 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:50.044 + for M in /var/spdk/build-*-manifest.txt
00:02:50.044 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:50.044 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:50.044 + for M in /var/spdk/build-*-manifest.txt
00:02:50.044 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:50.044 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:50.044 ++ uname
00:02:50.044 + [[ Linux == \L\i\n\u\x ]]
00:02:50.044 + sudo dmesg -T
00:02:50.044 + sudo dmesg --clear
00:02:50.044 + dmesg_pid=5028
+ [[ Fedora Linux == FreeBSD ]]
00:02:50.044 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:50.044 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:50.044 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:50.044 + sudo dmesg -Tw
00:02:50.044 + [[ -x /usr/src/fio-static/fio ]]
00:02:50.044 + export FIO_BIN=/usr/src/fio-static/fio
00:02:50.044 + FIO_BIN=/usr/src/fio-static/fio
00:02:50.044 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:50.044 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:50.044 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:50.044 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:50.044 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:50.044 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:50.044 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:50.044 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:50.044 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:50.044 15:50:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:50.044 15:50:48 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:50.044 15:50:48 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:50.044 15:50:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:50.045 15:50:48 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:50.045 15:50:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:50.045 15:50:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:50.045 15:50:48 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:50.045 15:50:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:50.045 15:50:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:50.045 15:50:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:50.045 15:50:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:50.045 15:50:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:50.045 15:50:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:50.045 15:50:48 -- paths/export.sh@5 -- $ export PATH
00:02:50.045 15:50:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:50.045 15:50:48 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:50.045 15:50:48 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:50.045 15:50:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732117848.XXXXXX
00:02:50.045 15:50:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732117848.uP1AJQ
00:02:50.045 15:50:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:50.045 15:50:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:50.045 15:50:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:50.045 15:50:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:50.045 15:50:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:50.045 15:50:48 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:50.045 15:50:48 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:50.045 15:50:48 -- common/autotest_common.sh@10 -- $ set +x
00:02:50.045 15:50:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:50.045 15:50:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:50.045 15:50:48 -- pm/common@17 -- $ local monitor
00:02:50.045 15:50:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:50.045 15:50:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:50.045 15:50:48 -- pm/common@25 -- $ sleep 1
00:02:50.045 15:50:48 -- pm/common@21 -- $ date +%s
00:02:50.302 15:50:48 -- pm/common@21 -- $ date +%s
00:02:50.302 15:50:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732117848
00:02:50.302 15:50:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732117848
00:02:50.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732117848_collect-cpu-load.pm.log
00:02:50.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732117848_collect-vmstat.pm.log
00:02:51.236 15:50:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:51.236 15:50:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:51.236 15:50:49 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:51.236 15:50:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:51.236 15:50:49 -- spdk/autobuild.sh@16 -- $ date -u
00:02:51.236 Wed Nov 20 03:50:49 PM UTC 2024
00:02:51.236 15:50:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:51.236 v25.01-pre-241-g0728de5b0
00:02:51.236 15:50:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:51.236 15:50:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:51.236 15:50:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:51.236 15:50:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:51.236 15:50:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:51.236 ************************************
00:02:51.236 START TEST asan
00:02:51.236 ************************************
00:02:51.236 using asan
00:02:51.236 15:50:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:51.236
00:02:51.236 real 0m0.000s
00:02:51.236 user 0m0.000s
00:02:51.236 sys 0m0.000s
00:02:51.236 15:50:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:51.236 ************************************
00:02:51.236 15:50:49 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:51.236 END TEST asan
00:02:51.236 ************************************
00:02:51.236 15:50:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:51.236 15:50:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:51.236 15:50:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:51.236 15:50:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:51.236 15:50:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:51.236 ************************************
00:02:51.236 START TEST ubsan
00:02:51.236 ************************************
00:02:51.236 using ubsan
00:02:51.236 15:50:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:51.236
00:02:51.236 real 0m0.000s
00:02:51.236 user 0m0.000s
00:02:51.236 sys 0m0.000s
00:02:51.236 ************************************
00:02:51.236 END TEST ubsan
00:02:51.236 ************************************
00:02:51.236 15:50:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:51.236 15:50:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:51.236 15:50:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:51.236 15:50:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:51.236 15:50:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:51.236 15:50:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:51.236 15:50:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:51.236 15:50:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:51.236 15:50:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:51.236 15:50:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:51.236 15:50:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:51.236 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:51.236 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:51.800 Using 'verbs' RDMA provider
00:03:02.322 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:12.341 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:12.341 Creating mk/config.mk...done.
00:03:12.341 Creating mk/cc.flags.mk...done.
00:03:12.341 Type 'make' to build.
00:03:12.341 15:51:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:12.341 15:51:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:12.341 15:51:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:12.341 15:51:09 -- common/autotest_common.sh@10 -- $ set +x
00:03:12.341 ************************************
00:03:12.341 START TEST make
00:03:12.341 ************************************
00:03:12.341 15:51:09 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:12.341 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:12.341 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:12.341 meson setup builddir \
00:03:12.341 -Dwith-libaio=enabled \
00:03:12.341 -Dwith-liburing=enabled \
00:03:12.341 -Dwith-libvfn=disabled \
00:03:12.341 -Dwith-spdk=disabled \
00:03:12.341 -Dexamples=false \
00:03:12.341 -Dtests=false \
00:03:12.341 -Dtools=false && \
00:03:12.341 meson compile -C builddir && \
00:03:12.341 cd -)
00:03:12.341 make[1]: Nothing to be done for 'all'.
00:03:14.235 The Meson build system
00:03:14.235 Version: 1.5.0
00:03:14.235 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:14.235 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:14.235 Build type: native build
00:03:14.235 Project name: xnvme
00:03:14.235 Project version: 0.7.5
00:03:14.235 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:14.235 C linker for the host machine: cc ld.bfd 2.40-14
00:03:14.235 Host machine cpu family: x86_64
00:03:14.235 Host machine cpu: x86_64
00:03:14.235 Message: host_machine.system: linux
00:03:14.235 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:14.235 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:14.235 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:14.235 Run-time dependency threads found: YES
00:03:14.235 Has header "setupapi.h" : NO
00:03:14.235 Has header "linux/blkzoned.h" : YES
00:03:14.235 Has header "linux/blkzoned.h" : YES (cached)
00:03:14.235 Has header "libaio.h" : YES
00:03:14.235 Library aio found: YES
00:03:14.235 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:14.235 Run-time dependency liburing found: YES 2.2
00:03:14.235 Dependency libvfn skipped: feature with-libvfn disabled
00:03:14.235 Found CMake: /usr/bin/cmake (3.27.7)
00:03:14.235 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:14.235 Subproject spdk : skipped: feature with-spdk disabled
00:03:14.235 Run-time dependency appleframeworks found: NO (tried framework)
00:03:14.235 Run-time dependency appleframeworks found: NO (tried framework)
00:03:14.235 Library rt found: YES
00:03:14.235 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:14.235 Configuring xnvme_config.h using configuration
00:03:14.235 Configuring xnvme.spec using configuration
00:03:14.235 Run-time dependency bash-completion found: YES 2.11
00:03:14.235 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:14.235 Program cp found: YES (/usr/bin/cp)
00:03:14.235 Build targets in project: 3
00:03:14.235
00:03:14.235 xnvme 0.7.5
00:03:14.235
00:03:14.235 Subprojects
00:03:14.235 spdk : NO Feature 'with-spdk' disabled
00:03:14.235
00:03:14.235 User defined options
00:03:14.235 examples : false
00:03:14.235 tests : false
00:03:14.235 tools : false
00:03:14.235 with-libaio : enabled
00:03:14.235 with-liburing: enabled
00:03:14.235 with-libvfn : disabled
00:03:14.235 with-spdk : disabled
00:03:14.235
00:03:14.235 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:14.492 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:14.492 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:14.492 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:14.492 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:14.492 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:14.492 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:14.492 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:14.492 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:14.492 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:14.492 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:14.492 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:14.492 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:14.492 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:14.858 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:14.858 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:14.858 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:14.858 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:14.858 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:14.858 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:14.858 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:14.858 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:14.858 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:14.858 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:14.858 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:14.858 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:14.858 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:14.858 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:14.858 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:14.858 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:14.858 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:14.858 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:14.858 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:14.858 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:14.858 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:14.858 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:14.858 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:14.858 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:14.858 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:14.858 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:14.858 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:14.858 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:14.858 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:14.858 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:14.858 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:14.859 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:14.859 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:14.859 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:14.859 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:14.859 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:14.859 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:14.859 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:14.859 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:14.859 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:14.859 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:15.131 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:15.131 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:15.131 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:15.131 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:15.131 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:15.131 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:15.131 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:15.131 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:15.131 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:15.131 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:15.131 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:15.131 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:15.131 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:15.131 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:15.131 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:15.131 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:15.131 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:15.131 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:15.131 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:15.388 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:15.645 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:15.645 [75/76] Linking static target lib/libxnvme.a
00:03:15.645 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:15.645 INFO: autodetecting backend as ninja
00:03:15.645 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:22.198 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:22.198 The Meson build system
00:03:22.198 Version: 1.5.0
00:03:22.198 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:22.198 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:22.198 Build type: native build
00:03:22.198 Program cat found: YES (/usr/bin/cat)
00:03:22.198 Project name: DPDK
00:03:22.198 Project version: 24.03.0
00:03:22.198 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:22.198 C linker for the host machine: cc ld.bfd 2.40-14
00:03:22.198 Host machine cpu family: x86_64
00:03:22.198 Host machine cpu: x86_64
00:03:22.198 Message: ## Building in Developer Mode ##
00:03:22.198 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:22.198 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:22.198 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:22.198 Program python3 found: YES (/usr/bin/python3)
00:03:22.198 Program cat found: YES (/usr/bin/cat)
00:03:22.198 Compiler for C supports arguments -march=native: YES
00:03:22.198 Checking for size of "void *" : 8
00:03:22.198 Checking for size of "void *" : 8 (cached)
00:03:22.198 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:22.198 Library m found: YES
00:03:22.198 Library numa found: YES
00:03:22.198 Has header "numaif.h" : YES
00:03:22.198 Library fdt found: NO
00:03:22.198 Library execinfo found: NO
00:03:22.198 Has header "execinfo.h" : YES
00:03:22.198 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:22.198 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:22.198 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:22.198 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:22.198 Run-time dependency openssl found: YES 3.1.1
00:03:22.198 Run-time dependency libpcap found: YES 1.10.4
00:03:22.198 Has header "pcap.h" with dependency libpcap: YES
00:03:22.198 Compiler for C supports arguments -Wcast-qual: YES
00:03:22.198 Compiler for C supports arguments -Wdeprecated: YES
00:03:22.198 Compiler for C supports arguments -Wformat: YES
00:03:22.198 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:22.198 Compiler for C supports arguments -Wformat-security: NO
00:03:22.198 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:22.198 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:22.198 Compiler for C supports arguments -Wnested-externs: YES
00:03:22.198 Compiler for C supports arguments -Wold-style-definition: YES
00:03:22.198 Compiler for C supports arguments -Wpointer-arith: YES
00:03:22.198 Compiler for C supports arguments -Wsign-compare: YES
00:03:22.199 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:22.199 Compiler for C supports arguments -Wundef: YES
00:03:22.199 Compiler for C supports arguments -Wwrite-strings: YES
00:03:22.199 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:22.199 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:22.199 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:22.199 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:22.199 Program objdump found: YES (/usr/bin/objdump)
00:03:22.199 Compiler for C supports arguments -mavx512f: YES
00:03:22.199 Checking if "AVX512 checking" compiles: YES
00:03:22.199 Fetching value of define "__SSE4_2__" : 1
00:03:22.199 Fetching value of define "__AES__" : 1
00:03:22.199 Fetching value of define "__AVX__" : 1
00:03:22.199 Fetching value of define "__AVX2__" : 1
00:03:22.199 Fetching value of define "__AVX512BW__" : 1
00:03:22.199 Fetching value of define "__AVX512CD__" : 1
00:03:22.199 Fetching value of define "__AVX512DQ__" : 1
00:03:22.199 Fetching value of define "__AVX512F__" : 1
00:03:22.199 Fetching value of define "__AVX512VL__" : 1
00:03:22.199 Fetching value of define "__PCLMUL__" : 1
00:03:22.199 Fetching value of define "__RDRND__" : 1
00:03:22.199 Fetching value of define "__RDSEED__" : 1
00:03:22.199 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:22.199 Fetching value of define "__znver1__" : (undefined)
00:03:22.199 Fetching value of define "__znver2__" : (undefined)
00:03:22.199 Fetching value of define "__znver3__" : (undefined)
00:03:22.199 Fetching value of define "__znver4__" : (undefined)
00:03:22.199 Library asan found: YES
00:03:22.199 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:22.199 Message: lib/log: Defining dependency "log"
00:03:22.199 Message: lib/kvargs: Defining dependency "kvargs"
00:03:22.199 Message: lib/telemetry: Defining dependency "telemetry"
00:03:22.199 Library rt found: YES
00:03:22.199 Checking for function "getentropy" : NO
00:03:22.199 Message: lib/eal: Defining dependency "eal"
00:03:22.199 Message: lib/ring: Defining dependency "ring"
00:03:22.199 Message: lib/rcu: Defining dependency "rcu"
00:03:22.199 Message: lib/mempool: Defining dependency "mempool"
00:03:22.199 Message: lib/mbuf: Defining dependency "mbuf"
00:03:22.199 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:22.199 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:22.199 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:22.199 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:22.199 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:22.199 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:22.199 Compiler for C supports arguments -mpclmul: YES
00:03:22.199 Compiler for C supports arguments -maes: YES
00:03:22.199 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:22.199 Compiler for C supports arguments -mavx512bw: YES
00:03:22.199 Compiler for C supports arguments -mavx512dq: YES
00:03:22.199 Compiler for C supports arguments -mavx512vl: YES
00:03:22.199 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:22.199 Compiler for C supports arguments -mavx2: YES
00:03:22.199 Compiler for C supports arguments -mavx: YES
00:03:22.199 Message: lib/net: Defining dependency "net"
00:03:22.199 Message: lib/meter: Defining dependency "meter"
00:03:22.199 Message: lib/ethdev: Defining dependency "ethdev"
00:03:22.199 Message: lib/pci: Defining dependency "pci"
00:03:22.199 Message: lib/cmdline: Defining dependency "cmdline"
00:03:22.199 Message: lib/hash: Defining dependency "hash"
00:03:22.199 Message: lib/timer: Defining dependency "timer"
00:03:22.199 Message: lib/compressdev: Defining dependency "compressdev"
00:03:22.199 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:22.199 Message: lib/dmadev: Defining dependency "dmadev"
00:03:22.199 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:22.199 Message: lib/power: Defining dependency "power"
00:03:22.199 Message: lib/reorder: Defining dependency "reorder"
00:03:22.199 Message: lib/security: Defining dependency "security"
00:03:22.199 Has header "linux/userfaultfd.h" : YES
00:03:22.199 Has header "linux/vduse.h" : YES
00:03:22.199 Message: lib/vhost: Defining dependency "vhost"
00:03:22.199 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:22.199 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:22.199 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:22.199 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:22.199 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:22.199 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:22.199 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:22.199 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:22.199 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:22.199 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:22.199 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:22.199 Configuring doxy-api-html.conf using configuration
00:03:22.199 Configuring doxy-api-man.conf using configuration
00:03:22.199 Program mandb found: YES (/usr/bin/mandb)
00:03:22.199 Program sphinx-build found: NO
00:03:22.199 Configuring rte_build_config.h using configuration
00:03:22.199 Message:
00:03:22.199 =================
00:03:22.199 Applications Enabled
00:03:22.199 =================
00:03:22.199
00:03:22.199 apps:
00:03:22.199
00:03:22.199
00:03:22.199 Message:
00:03:22.199 =================
00:03:22.199 Libraries Enabled
00:03:22.199 =================
00:03:22.199
00:03:22.199 libs:
00:03:22.199 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:22.199 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:22.199 cryptodev, dmadev, power, reorder, security, vhost,
00:03:22.199
00:03:22.199 Message:
00:03:22.199 ===============
00:03:22.199 Drivers Enabled
00:03:22.199 ===============
00:03:22.199
00:03:22.199 common:
00:03:22.199
00:03:22.199 bus:
00:03:22.199 pci, vdev,
00:03:22.199 mempool:
00:03:22.199 ring,
00:03:22.199 dma:
00:03:22.199
00:03:22.199 net:
00:03:22.199
00:03:22.199 crypto:
00:03:22.199
00:03:22.199 compress:
00:03:22.199
00:03:22.199 vdpa:
00:03:22.199
00:03:22.199
00:03:22.199 Message:
00:03:22.199 =================
00:03:22.199 Content Skipped
00:03:22.199 =================
00:03:22.199
00:03:22.199 apps:
00:03:22.199 dumpcap: explicitly disabled via build config
00:03:22.199 graph: explicitly disabled via build config
00:03:22.199 pdump: explicitly disabled via build config
00:03:22.199 proc-info: explicitly disabled via build config
00:03:22.199 test-acl: explicitly disabled via build config
00:03:22.199 test-bbdev: explicitly disabled via build config
00:03:22.199 test-cmdline: explicitly disabled via build config
00:03:22.199 test-compress-perf: explicitly disabled via build config
00:03:22.199 test-crypto-perf: explicitly disabled via build config
00:03:22.199 test-dma-perf: explicitly disabled via build config
00:03:22.199 test-eventdev: explicitly disabled via build config
00:03:22.199 test-fib: explicitly disabled via build config
00:03:22.199 test-flow-perf: explicitly disabled via build config
00:03:22.199 test-gpudev: explicitly disabled via build config
00:03:22.199 test-mldev: explicitly disabled via build config
00:03:22.199 test-pipeline: explicitly disabled via build config
00:03:22.199 test-pmd: explicitly disabled via build config
00:03:22.199 test-regex: explicitly disabled via build config
00:03:22.199 test-sad: explicitly disabled via build config
00:03:22.199 test-security-perf: explicitly disabled via build config
00:03:22.199
00:03:22.199 libs:
00:03:22.199 argparse: explicitly disabled via build config
00:03:22.199 metrics: explicitly disabled via build config
00:03:22.199 acl: explicitly disabled via build config
00:03:22.199 bbdev: explicitly disabled via build config
00:03:22.199 bitratestats: explicitly disabled via build config
00:03:22.199 bpf: explicitly disabled via build config
00:03:22.199 cfgfile: explicitly disabled via build config
00:03:22.199 distributor: explicitly disabled via build config
00:03:22.199 efd: explicitly disabled via build config
00:03:22.199 eventdev: explicitly disabled via build config
00:03:22.199 dispatcher: explicitly disabled via build config
00:03:22.199 gpudev: explicitly disabled via build config
00:03:22.199 gro: explicitly disabled via build config
00:03:22.199 gso: explicitly disabled via build config
00:03:22.199 ip_frag: explicitly disabled via build config
00:03:22.199 jobstats: explicitly disabled via build config
00:03:22.199 latencystats: explicitly disabled via build config
00:03:22.199 lpm: explicitly disabled via build config
00:03:22.199 member: explicitly disabled via build config
00:03:22.199 pcapng: explicitly disabled via build config
00:03:22.199 rawdev: explicitly disabled via build config
00:03:22.199 regexdev: explicitly disabled via build config
00:03:22.199 mldev: explicitly disabled via build config
00:03:22.199 rib: explicitly disabled via build config
00:03:22.199 sched: explicitly disabled via build config
00:03:22.199 stack: explicitly disabled via build config
00:03:22.199 ipsec: explicitly disabled via build config
00:03:22.199 pdcp: explicitly disabled via build config
00:03:22.199 fib: explicitly disabled via build config
00:03:22.199 port: explicitly disabled via build config
00:03:22.199 pdump: explicitly disabled via build config
00:03:22.199 table: explicitly disabled via build config
00:03:22.199 pipeline: explicitly disabled via build config
00:03:22.199 graph: explicitly disabled via build config
00:03:22.199 node: explicitly disabled via build config
00:03:22.199
00:03:22.199 drivers:
00:03:22.199 common/cpt: not in enabled drivers build config
00:03:22.199 common/dpaax: not in enabled drivers build config
00:03:22.199 common/iavf: not in enabled drivers build config
00:03:22.199 common/idpf: not in enabled drivers build config
00:03:22.199 common/ionic: not in enabled drivers build config
00:03:22.199 common/mvep: not in enabled drivers build config
00:03:22.200 common/octeontx: not in enabled drivers build config
00:03:22.200 bus/auxiliary: not in enabled drivers build config
00:03:22.200 bus/cdx: not in enabled drivers build config
00:03:22.200 bus/dpaa: not in enabled drivers build config
00:03:22.200 bus/fslmc: not in enabled drivers build config
00:03:22.200 bus/ifpga: not in enabled drivers build config
00:03:22.200 bus/platform: not in enabled drivers build config
00:03:22.200 bus/uacce: not in enabled drivers build config
00:03:22.200 bus/vmbus: not in enabled drivers build config
00:03:22.200 common/cnxk: not in enabled drivers build config
00:03:22.200 common/mlx5: not in enabled drivers build config
00:03:22.200 common/nfp: not in enabled drivers build config
00:03:22.200 common/nitrox: not in enabled drivers build config
00:03:22.200 common/qat: not in enabled drivers build config
00:03:22.200 common/sfc_efx: not in enabled drivers build config
00:03:22.200 mempool/bucket: not in enabled drivers build config
00:03:22.200 mempool/cnxk: not in enabled drivers build config
00:03:22.200 mempool/dpaa: not in enabled drivers build config
00:03:22.200 mempool/dpaa2: not in enabled drivers build config
00:03:22.200 mempool/octeontx: not in enabled drivers build config
00:03:22.200 mempool/stack: not in enabled drivers build config
00:03:22.200 dma/cnxk: not in enabled drivers build config
00:03:22.200 dma/dpaa: not in enabled drivers build config
00:03:22.200 dma/dpaa2: not in enabled drivers build config
00:03:22.200 dma/hisilicon: not in enabled drivers build config
00:03:22.200 dma/idxd: not in enabled drivers build config
00:03:22.200 dma/ioat: not in enabled drivers build config
00:03:22.200 dma/skeleton: not in enabled drivers build config
00:03:22.200 net/af_packet: not in enabled drivers build config
00:03:22.200 net/af_xdp: not in enabled drivers build config
00:03:22.200 net/ark: not in enabled drivers build config
00:03:22.200 net/atlantic: not in enabled drivers build config
00:03:22.200 net/avp: not in enabled drivers build config
00:03:22.200 net/axgbe: not in enabled drivers build config
00:03:22.200 net/bnx2x: not in enabled drivers build config
00:03:22.200 net/bnxt: not in enabled drivers build config
00:03:22.200 net/bonding: not in enabled drivers build config
00:03:22.200 net/cnxk: not in enabled drivers build config
00:03:22.200 net/cpfl: not in enabled drivers build config
00:03:22.200 net/cxgbe: not in enabled drivers build config
00:03:22.200 net/dpaa: not in enabled drivers build config
00:03:22.200 net/dpaa2: not in enabled drivers build config
00:03:22.200 net/e1000: not in enabled drivers build config
00:03:22.200 net/ena: not in enabled drivers build config
00:03:22.200 net/enetc: not in enabled drivers build config
00:03:22.200 net/enetfec: not in enabled drivers build config
00:03:22.200 net/enic: not in enabled drivers build config
00:03:22.200 net/failsafe: not in enabled drivers build config
00:03:22.200 net/fm10k: not in enabled drivers build config
00:03:22.200 net/gve: not in enabled drivers build config
00:03:22.200 net/hinic: not in enabled drivers build config
00:03:22.200 net/hns3: not in enabled drivers build config
00:03:22.200 net/i40e: not in enabled drivers build config
00:03:22.200 net/iavf: not in enabled drivers build config
00:03:22.200 net/ice: not in enabled drivers build config
00:03:22.200 net/idpf: not in enabled drivers build config
00:03:22.200 net/igc: not in enabled drivers build config
00:03:22.200 net/ionic: not in enabled drivers build config
00:03:22.200 net/ipn3ke: not in enabled drivers build config
00:03:22.200 net/ixgbe: not in enabled drivers build config
00:03:22.200 net/mana: not in enabled drivers build config
00:03:22.200 net/memif: not in enabled drivers build config
00:03:22.200 net/mlx4: not in enabled drivers build config
00:03:22.200 net/mlx5: not in enabled drivers build config
00:03:22.200 net/mvneta: not in enabled drivers build config
00:03:22.200 net/mvpp2: not in enabled drivers build config
00:03:22.200 net/netvsc: not in enabled drivers build config
00:03:22.200 net/nfb: not in enabled drivers build config
00:03:22.200 net/nfp: not in enabled drivers build config
00:03:22.200 net/ngbe: not in enabled drivers build config
00:03:22.200 net/null: not in enabled drivers build config
00:03:22.200 net/octeontx: not in enabled drivers build config
00:03:22.200 net/octeon_ep: not in enabled drivers build config
00:03:22.200 net/pcap: not in enabled drivers build config
00:03:22.200 net/pfe: not in enabled drivers build config
00:03:22.200 net/qede: not in enabled drivers build config
00:03:22.200 net/ring: not in enabled drivers build config
00:03:22.200 net/sfc: not in enabled drivers build config
00:03:22.200 net/softnic: not in enabled drivers build config
00:03:22.200 net/tap: not in enabled drivers build config
00:03:22.200 net/thunderx: not in enabled drivers build config
00:03:22.200 net/txgbe: not in enabled drivers build config
00:03:22.200 net/vdev_netvsc: not in enabled drivers build config
00:03:22.200 net/vhost: not in enabled drivers build config
00:03:22.200 net/virtio: not in enabled drivers build config
00:03:22.200 net/vmxnet3: not in enabled drivers build config
00:03:22.200 raw/*: missing internal dependency, "rawdev"
00:03:22.200 crypto/armv8: not in enabled drivers build config
00:03:22.200 crypto/bcmfs: not in enabled drivers build config
00:03:22.200 crypto/caam_jr: not in enabled drivers build config
00:03:22.200 crypto/ccp: not in enabled drivers build config
00:03:22.200 crypto/cnxk: not in enabled drivers build config
00:03:22.200 crypto/dpaa_sec: not in enabled drivers build config
00:03:22.200 crypto/dpaa2_sec: not in enabled drivers build config
00:03:22.200 crypto/ipsec_mb: not in enabled drivers build config
00:03:22.200 crypto/mlx5: not in enabled drivers build config
00:03:22.200 crypto/mvsam: not in enabled drivers build config
00:03:22.200 crypto/nitrox: not in enabled drivers build config 00:03:22.200 crypto/null: not in enabled drivers build config 00:03:22.200 crypto/octeontx: not in enabled drivers build config 00:03:22.200 crypto/openssl: not in enabled drivers build config 00:03:22.200 crypto/scheduler: not in enabled drivers build config 00:03:22.200 crypto/uadk: not in enabled drivers build config 00:03:22.200 crypto/virtio: not in enabled drivers build config 00:03:22.200 compress/isal: not in enabled drivers build config 00:03:22.200 compress/mlx5: not in enabled drivers build config 00:03:22.200 compress/nitrox: not in enabled drivers build config 00:03:22.200 compress/octeontx: not in enabled drivers build config 00:03:22.200 compress/zlib: not in enabled drivers build config 00:03:22.200 regex/*: missing internal dependency, "regexdev" 00:03:22.200 ml/*: missing internal dependency, "mldev" 00:03:22.200 vdpa/ifc: not in enabled drivers build config 00:03:22.200 vdpa/mlx5: not in enabled drivers build config 00:03:22.200 vdpa/nfp: not in enabled drivers build config 00:03:22.200 vdpa/sfc: not in enabled drivers build config 00:03:22.200 event/*: missing internal dependency, "eventdev" 00:03:22.200 baseband/*: missing internal dependency, "bbdev" 00:03:22.200 gpu/*: missing internal dependency, "gpudev" 00:03:22.200 00:03:22.200 00:03:22.458 Build targets in project: 84 00:03:22.458 00:03:22.458 DPDK 24.03.0 00:03:22.458 00:03:22.458 User defined options 00:03:22.458 buildtype : debug 00:03:22.458 default_library : shared 00:03:22.458 libdir : lib 00:03:22.458 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:22.458 b_sanitize : address 00:03:22.458 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:22.458 c_link_args : 00:03:22.458 cpu_instruction_set: native 00:03:22.458 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:22.458 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:22.458 enable_docs : false 00:03:22.458 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:22.458 enable_kmods : false 00:03:22.458 max_lcores : 128 00:03:22.458 tests : false 00:03:22.458 00:03:22.458 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:23.022 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:23.280 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:23.280 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:23.280 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:23.280 [4/267] Linking static target lib/librte_kvargs.a 00:03:23.280 [5/267] Linking static target lib/librte_log.a 00:03:23.281 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:23.538 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:23.538 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:23.538 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
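(The block above is the standard Meson configuration summary for the DPDK submodule that SPDK vendors. As a hedged sketch only — the real invocation is assembled by SPDK's dpdkbuild makefile, and the long disable lists are abridged here — a hand-run configure step matching the logged "User defined options" would look roughly like:

    # Sketch, not the literal command from this job; option values mirror
    # the "User defined options" summary above, with lists abridged.
    DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk
    meson setup "$DPDK_DIR/build-tmp" "$DPDK_DIR" \
        --prefix="$DPDK_DIR/build" \
        -Dbuildtype=debug -Ddefault_library=shared -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror' \
        -Ddisable_apps='dumpcap,graph,pdump' \
        -Ddisable_libs='acl,argparse,bbdev' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_docs=false -Dtests=false -Dmax_lcores=128
    ninja -C "$DPDK_DIR/build-tmp" -j 10

The [N/267] lines that follow are ninja's progress output for exactly this kind of build.)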
00:03:23.538 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.538 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:23.538 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:23.538 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:23.538 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:23.795 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:23.795 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:23.795 [17/267] Linking static target lib/librte_telemetry.a 00:03:23.795 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:24.053 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.053 [20/267] Linking target lib/librte_log.so.24.1 00:03:24.053 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:24.053 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:24.053 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:24.310 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:24.310 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:24.310 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:24.310 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:24.310 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:24.310 [29/267] Linking target lib/librte_kvargs.so.24.1 00:03:24.310 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:24.310 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:24.310 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:24.569 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:24.569 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.569 [35/267] Linking target lib/librte_telemetry.so.24.1 00:03:24.569 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:24.569 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:24.569 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:24.826 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:24.826 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:24.826 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:24.826 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:24.826 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:24.826 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:24.826 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:24.826 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:25.134 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:25.134 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:25.134 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:25.134 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:25.406 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:25.406 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:25.406 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:25.406 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:25.406 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:25.406 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:25.406 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:25.406 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:25.664 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:25.664 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:25.664 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:25.664 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:25.664 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:25.664 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:25.664 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:25.664 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:25.922 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:25.922 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:26.180 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:26.180 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:26.180 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:26.180 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:26.180 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:26.180 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:26.180 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:26.180 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:26.439 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:26.439 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:26.439 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:26.439 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:26.439 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:26.439 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:26.697 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:26.697 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:26.697 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:26.697 [86/267] Linking static target lib/librte_eal.a 00:03:26.697 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:26.956 [88/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:26.956 [89/267] Linking 
static target lib/librte_ring.a 00:03:26.956 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:26.956 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:26.956 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:26.956 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:26.956 [94/267] Linking static target lib/librte_mempool.a 00:03:27.214 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:27.214 [96/267] Linking static target lib/librte_rcu.a 00:03:27.214 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:27.214 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:27.214 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:27.214 [100/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.214 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:27.471 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:27.471 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:27.471 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:27.471 [105/267] Linking static target lib/librte_net.a 00:03:27.471 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:27.471 [107/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.471 [108/267] Linking static target lib/librte_meter.a 00:03:27.729 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:27.729 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:27.729 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:27.987 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:27.987 [113/267] Linking static target lib/librte_mbuf.a 00:03:27.987 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:27.987 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.987 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.987 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:28.246 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.246 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:28.246 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:28.503 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:28.503 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:28.784 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:28.784 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:28.784 [125/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.784 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:28.784 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:28.784 [128/267] Linking static target lib/librte_pci.a 00:03:28.784 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:28.784 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
00:03:28.784 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:28.784 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:29.062 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:29.062 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:29.062 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:29.062 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:29.062 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:29.062 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:29.062 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:29.062 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:29.062 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:29.062 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:29.062 [143/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.062 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:29.062 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:29.062 [146/267] Linking static target lib/librte_cmdline.a 00:03:29.333 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:29.333 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:29.591 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:29.591 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:29.591 [151/267] Linking static target lib/librte_timer.a 00:03:29.591 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:29.591 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:29.848 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:29.848 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:29.848 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:29.848 [157/267] Linking static target lib/librte_compressdev.a 00:03:29.848 [158/267] Linking static target lib/librte_hash.a 00:03:29.848 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:29.848 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:29.848 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.848 [162/267] Linking static target lib/librte_ethdev.a 00:03:30.105 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:30.105 [164/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:30.105 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:30.105 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:30.105 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:30.105 [168/267] Linking static target lib/librte_dmadev.a 00:03:30.362 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:30.362 [170/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:30.362 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:30.620 [172/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:30.620 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:30.620 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.620 [175/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:30.879 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:30.879 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:30.879 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:30.879 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.879 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:30.879 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.879 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:30.879 [183/267] Linking static target lib/librte_cryptodev.a 00:03:31.136 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:31.136 [185/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:31.136 [186/267] Linking static target lib/librte_reorder.a 00:03:31.136 [187/267] Linking static target lib/librte_power.a 00:03:31.136 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:31.136 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:31.393 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:31.649 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:31.649 [192/267] Linking static target lib/librte_security.a 00:03:31.649 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:31.649 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.906 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:31.907 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:32.163 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.163 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:32.163 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.163 [200/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.163 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:32.421 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:32.421 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:32.421 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.421 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:32.679 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:32.679 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:32.679 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:32.679 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:32.679 
[210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.679 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:32.937 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:32.937 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:32.937 [214/267] Linking static target drivers/librte_bus_vdev.a 00:03:32.937 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:32.937 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:32.937 [217/267] Linking static target drivers/librte_bus_pci.a 00:03:32.937 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:32.937 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:32.937 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:32.937 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.194 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:33.194 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.194 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.194 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:33.194 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.759 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:34.690 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.690 [229/267] Linking target lib/librte_eal.so.24.1 00:03:34.690 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:34.690 [231/267] Linking target lib/librte_ring.so.24.1 00:03:34.690 [232/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:34.691 [233/267] Linking target lib/librte_timer.so.24.1 00:03:34.691 [234/267] Linking target lib/librte_dmadev.so.24.1 00:03:34.691 [235/267] Linking target lib/librte_meter.so.24.1 00:03:34.691 [236/267] Linking target lib/librte_pci.so.24.1 00:03:34.691 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:34.691 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:34.691 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:34.691 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:34.691 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:34.691 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:34.948 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:34.948 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:34.948 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:34.948 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:34.948 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:34.948 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:34.948 [249/267] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:35.205 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:03:35.205 [251/267] Linking target lib/librte_reorder.so.24.1 00:03:35.205 [252/267] Linking target lib/librte_net.so.24.1 00:03:35.205 [253/267] Linking target lib/librte_compressdev.so.24.1 00:03:35.205 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:35.205 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:35.205 [256/267] Linking target lib/librte_security.so.24.1 00:03:35.205 [257/267] Linking target lib/librte_cmdline.so.24.1 00:03:35.205 [258/267] Linking target lib/librte_hash.so.24.1 00:03:35.463 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:35.720 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.720 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:35.720 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:35.978 [263/267] Linking target lib/librte_power.so.24.1 00:03:36.910 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:36.910 [265/267] Linking static target lib/librte_vhost.a 00:03:38.291 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.291 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:38.291 INFO: autodetecting backend as ninja 00:03:38.291 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:53.288 CC lib/log/log_flags.o 00:03:53.288 CC lib/log/log_deprecated.o 00:03:53.288 CC lib/log/log.o 00:03:53.288 CC lib/ut/ut.o 00:03:53.288 CC lib/ut_mock/mock.o 00:03:53.288 LIB libspdk_log.a 00:03:53.288 LIB libspdk_ut.a 00:03:53.289 LIB libspdk_ut_mock.a 00:03:53.289 SO libspdk_ut.so.2.0 00:03:53.289 SO libspdk_log.so.7.1 00:03:53.289 SO libspdk_ut_mock.so.6.0 00:03:53.289 SYMLINK libspdk_ut.so 00:03:53.289 SYMLINK libspdk_ut_mock.so 00:03:53.289 SYMLINK libspdk_log.so 00:03:53.289 CXX lib/trace_parser/trace.o 00:03:53.289 CC lib/ioat/ioat.o 00:03:53.289 CC lib/dma/dma.o 00:03:53.289 CC lib/util/base64.o 00:03:53.289 CC lib/util/bit_array.o 00:03:53.289 CC lib/util/cpuset.o 00:03:53.289 CC lib/util/crc16.o 00:03:53.289 CC lib/util/crc32c.o 00:03:53.289 CC lib/util/crc32.o 00:03:53.289 CC lib/vfio_user/host/vfio_user_pci.o 00:03:53.289 CC lib/util/crc32_ieee.o 00:03:53.289 CC lib/vfio_user/host/vfio_user.o 00:03:53.289 CC lib/util/crc64.o 00:03:53.289 LIB libspdk_dma.a 00:03:53.289 CC lib/util/dif.o 00:03:53.289 SO libspdk_dma.so.5.0 00:03:53.289 CC lib/util/fd.o 00:03:53.289 CC lib/util/fd_group.o 00:03:53.289 SYMLINK libspdk_dma.so 00:03:53.289 CC lib/util/file.o 00:03:53.289 CC lib/util/hexlify.o 00:03:53.289 LIB libspdk_ioat.a 00:03:53.289 CC lib/util/iov.o 00:03:53.289 SO libspdk_ioat.so.7.0 00:03:53.289 CC lib/util/math.o 00:03:53.289 LIB libspdk_vfio_user.a 00:03:53.289 SYMLINK libspdk_ioat.so 00:03:53.289 CC lib/util/net.o 00:03:53.289 CC lib/util/pipe.o 00:03:53.289 CC lib/util/strerror_tls.o 00:03:53.289 SO libspdk_vfio_user.so.5.0 00:03:53.289 CC lib/util/string.o 00:03:53.289 SYMLINK libspdk_vfio_user.so 00:03:53.289 CC lib/util/uuid.o 00:03:53.289 CC lib/util/xor.o 00:03:53.289 CC lib/util/zipf.o 00:03:53.289 CC lib/util/md5.o 00:03:53.289 LIB libspdk_util.a 00:03:53.289 SO libspdk_util.so.10.1 00:03:53.289 LIB libspdk_trace_parser.a 
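(At this point the log leaves DPDK and builds SPDK itself: CC/CXX lines compile objects, LIB lines archive static libraries, and each SO/SYMLINK pair links a versioned shared object and creates its unversioned symlink. A hedged sketch of the configure-and-build step behind output like this — the job actually drives it through SPDK's autorun scripts, so the flags below are only inferred from the debug build type, address sanitizer, shared libraries, and xnvme bdev module visible in this log:

    # Sketch, not the job's literal commands.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-shared --with-xnvme
    make -j10
)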
00:03:53.289 SO libspdk_trace_parser.so.6.0 00:03:53.289 SYMLINK libspdk_util.so 00:03:53.289 SYMLINK libspdk_trace_parser.so 00:03:53.289 CC lib/json/json_parse.o 00:03:53.289 CC lib/json/json_util.o 00:03:53.289 CC lib/json/json_write.o 00:03:53.289 CC lib/rdma_utils/rdma_utils.o 00:03:53.289 CC lib/idxd/idxd.o 00:03:53.289 CC lib/env_dpdk/env.o 00:03:53.289 CC lib/idxd/idxd_user.o 00:03:53.289 CC lib/idxd/idxd_kernel.o 00:03:53.289 CC lib/conf/conf.o 00:03:53.289 CC lib/vmd/vmd.o 00:03:53.545 CC lib/vmd/led.o 00:03:53.545 CC lib/env_dpdk/memory.o 00:03:53.545 CC lib/env_dpdk/pci.o 00:03:53.545 CC lib/env_dpdk/init.o 00:03:53.545 LIB libspdk_rdma_utils.a 00:03:53.545 SO libspdk_rdma_utils.so.1.0 00:03:53.545 LIB libspdk_json.a 00:03:53.545 LIB libspdk_conf.a 00:03:53.545 CC lib/env_dpdk/threads.o 00:03:53.545 SYMLINK libspdk_rdma_utils.so 00:03:53.545 CC lib/env_dpdk/pci_ioat.o 00:03:53.802 SO libspdk_json.so.6.0 00:03:53.802 SO libspdk_conf.so.6.0 00:03:53.802 SYMLINK libspdk_conf.so 00:03:53.802 SYMLINK libspdk_json.so 00:03:53.802 CC lib/env_dpdk/pci_virtio.o 00:03:53.802 CC lib/env_dpdk/pci_vmd.o 00:03:53.802 CC lib/env_dpdk/pci_idxd.o 00:03:53.802 CC lib/env_dpdk/pci_event.o 00:03:53.802 CC lib/env_dpdk/sigbus_handler.o 00:03:53.802 CC lib/env_dpdk/pci_dpdk.o 00:03:53.802 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:53.802 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.059 LIB libspdk_idxd.a 00:03:54.059 SO libspdk_idxd.so.12.1 00:03:54.059 CC lib/rdma_provider/common.o 00:03:54.059 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:54.059 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.059 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.059 CC lib/jsonrpc/jsonrpc_client.o 00:03:54.059 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.059 SYMLINK libspdk_idxd.so 00:03:54.059 LIB libspdk_vmd.a 00:03:54.059 SO libspdk_vmd.so.6.0 00:03:54.317 SYMLINK libspdk_vmd.so 00:03:54.317 LIB libspdk_jsonrpc.a 00:03:54.317 LIB libspdk_rdma_provider.a 00:03:54.317 SO libspdk_rdma_provider.so.7.0 00:03:54.317 SO libspdk_jsonrpc.so.6.0 00:03:54.317 SYMLINK libspdk_rdma_provider.so 00:03:54.317 SYMLINK libspdk_jsonrpc.so 00:03:54.573 CC lib/rpc/rpc.o 00:03:54.832 LIB libspdk_env_dpdk.a 00:03:54.832 LIB libspdk_rpc.a 00:03:54.832 SO libspdk_env_dpdk.so.15.1 00:03:54.832 SO libspdk_rpc.so.6.0 00:03:54.832 SYMLINK libspdk_rpc.so 00:03:55.105 SYMLINK libspdk_env_dpdk.so 00:03:55.105 CC lib/notify/notify.o 00:03:55.105 CC lib/notify/notify_rpc.o 00:03:55.105 CC lib/trace/trace.o 00:03:55.105 CC lib/trace/trace_flags.o 00:03:55.105 CC lib/trace/trace_rpc.o 00:03:55.105 CC lib/keyring/keyring.o 00:03:55.105 CC lib/keyring/keyring_rpc.o 00:03:55.105 LIB libspdk_notify.a 00:03:55.365 SO libspdk_notify.so.6.0 00:03:55.365 SYMLINK libspdk_notify.so 00:03:55.365 LIB libspdk_keyring.a 00:03:55.365 SO libspdk_keyring.so.2.0 00:03:55.365 LIB libspdk_trace.a 00:03:55.365 SO libspdk_trace.so.11.0 00:03:55.365 SYMLINK libspdk_keyring.so 00:03:55.365 SYMLINK libspdk_trace.so 00:03:55.625 CC lib/thread/thread.o 00:03:55.625 CC lib/thread/iobuf.o 00:03:55.625 CC lib/sock/sock.o 00:03:55.625 CC lib/sock/sock_rpc.o 00:03:56.195 LIB libspdk_sock.a 00:03:56.195 SO libspdk_sock.so.10.0 00:03:56.195 SYMLINK libspdk_sock.so 00:03:56.453 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:56.453 CC lib/nvme/nvme_ctrlr.o 00:03:56.453 CC lib/nvme/nvme_ns_cmd.o 00:03:56.453 CC lib/nvme/nvme_ns.o 00:03:56.453 CC lib/nvme/nvme_pcie_common.o 00:03:56.453 CC lib/nvme/nvme_pcie.o 00:03:56.453 CC lib/nvme/nvme_fabric.o 00:03:56.453 CC lib/nvme/nvme.o 00:03:56.453 CC 
lib/nvme/nvme_qpair.o 00:03:57.019 CC lib/nvme/nvme_quirks.o 00:03:57.019 CC lib/nvme/nvme_transport.o 00:03:57.019 CC lib/nvme/nvme_discovery.o 00:03:57.019 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:57.277 LIB libspdk_thread.a 00:03:57.277 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:57.277 SO libspdk_thread.so.11.0 00:03:57.277 CC lib/nvme/nvme_tcp.o 00:03:57.277 CC lib/nvme/nvme_opal.o 00:03:57.277 SYMLINK libspdk_thread.so 00:03:57.277 CC lib/nvme/nvme_io_msg.o 00:03:57.277 CC lib/nvme/nvme_poll_group.o 00:03:57.537 CC lib/nvme/nvme_zns.o 00:03:57.537 CC lib/nvme/nvme_stubs.o 00:03:57.537 CC lib/nvme/nvme_auth.o 00:03:57.798 CC lib/nvme/nvme_cuse.o 00:03:57.798 CC lib/nvme/nvme_rdma.o 00:03:57.798 CC lib/accel/accel.o 00:03:58.058 CC lib/blob/blobstore.o 00:03:58.058 CC lib/blob/request.o 00:03:58.058 CC lib/blob/zeroes.o 00:03:58.058 CC lib/blob/blob_bs_dev.o 00:03:58.318 CC lib/init/json_config.o 00:03:58.318 CC lib/init/subsystem.o 00:03:58.318 CC lib/virtio/virtio.o 00:03:58.318 CC lib/virtio/virtio_vhost_user.o 00:03:58.577 CC lib/virtio/virtio_vfio_user.o 00:03:58.577 CC lib/init/subsystem_rpc.o 00:03:58.577 CC lib/init/rpc.o 00:03:58.577 CC lib/accel/accel_rpc.o 00:03:58.577 LIB libspdk_init.a 00:03:58.577 CC lib/accel/accel_sw.o 00:03:58.904 CC lib/virtio/virtio_pci.o 00:03:58.904 SO libspdk_init.so.6.0 00:03:58.904 CC lib/fsdev/fsdev.o 00:03:58.904 CC lib/fsdev/fsdev_io.o 00:03:58.904 CC lib/fsdev/fsdev_rpc.o 00:03:58.904 SYMLINK libspdk_init.so 00:03:58.904 CC lib/event/app.o 00:03:58.904 CC lib/event/log_rpc.o 00:03:58.904 CC lib/event/reactor.o 00:03:58.904 LIB libspdk_virtio.a 00:03:58.904 CC lib/event/app_rpc.o 00:03:58.904 SO libspdk_virtio.so.7.0 00:03:59.163 LIB libspdk_nvme.a 00:03:59.163 SYMLINK libspdk_virtio.so 00:03:59.163 CC lib/event/scheduler_static.o 00:03:59.163 LIB libspdk_accel.a 00:03:59.163 SO libspdk_nvme.so.15.0 00:03:59.163 SO libspdk_accel.so.16.0 00:03:59.163 SYMLINK libspdk_accel.so 00:03:59.424 LIB libspdk_fsdev.a 00:03:59.424 SO libspdk_fsdev.so.2.0 00:03:59.424 LIB libspdk_event.a 00:03:59.424 CC lib/bdev/bdev_rpc.o 00:03:59.424 CC lib/bdev/bdev_zone.o 00:03:59.424 CC lib/bdev/bdev.o 00:03:59.424 SYMLINK libspdk_fsdev.so 00:03:59.424 CC lib/bdev/part.o 00:03:59.424 SYMLINK libspdk_nvme.so 00:03:59.424 CC lib/bdev/scsi_nvme.o 00:03:59.424 SO libspdk_event.so.14.0 00:03:59.424 SYMLINK libspdk_event.so 00:03:59.685 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:00.253 LIB libspdk_fuse_dispatcher.a 00:04:00.253 SO libspdk_fuse_dispatcher.so.1.0 00:04:00.253 SYMLINK libspdk_fuse_dispatcher.so 00:04:01.188 LIB libspdk_blob.a 00:04:01.188 SO libspdk_blob.so.11.0 00:04:01.447 SYMLINK libspdk_blob.so 00:04:01.706 CC lib/blobfs/blobfs.o 00:04:01.706 CC lib/blobfs/tree.o 00:04:01.706 CC lib/lvol/lvol.o 00:04:02.269 LIB libspdk_bdev.a 00:04:02.269 SO libspdk_bdev.so.17.0 00:04:02.568 LIB libspdk_blobfs.a 00:04:02.568 SO libspdk_blobfs.so.10.0 00:04:02.568 SYMLINK libspdk_bdev.so 00:04:02.568 SYMLINK libspdk_blobfs.so 00:04:02.568 LIB libspdk_lvol.a 00:04:02.568 SO libspdk_lvol.so.10.0 00:04:02.568 SYMLINK libspdk_lvol.so 00:04:02.568 CC lib/scsi/dev.o 00:04:02.568 CC lib/scsi/lun.o 00:04:02.568 CC lib/scsi/scsi.o 00:04:02.568 CC lib/scsi/port.o 00:04:02.568 CC lib/scsi/scsi_bdev.o 00:04:02.568 CC lib/nvmf/ctrlr.o 00:04:02.568 CC lib/nbd/nbd.o 00:04:02.568 CC lib/nbd/nbd_rpc.o 00:04:02.568 CC lib/ublk/ublk.o 00:04:02.569 CC lib/ftl/ftl_core.o 00:04:02.826 CC lib/ftl/ftl_init.o 00:04:02.826 CC lib/ftl/ftl_layout.o 00:04:02.826 CC lib/ublk/ublk_rpc.o 00:04:02.826 CC 
lib/scsi/scsi_pr.o 00:04:02.826 CC lib/nvmf/ctrlr_discovery.o 00:04:02.826 CC lib/nvmf/ctrlr_bdev.o 00:04:02.826 CC lib/ftl/ftl_debug.o 00:04:03.084 CC lib/ftl/ftl_io.o 00:04:03.084 LIB libspdk_nbd.a 00:04:03.084 SO libspdk_nbd.so.7.0 00:04:03.084 CC lib/ftl/ftl_sb.o 00:04:03.084 SYMLINK libspdk_nbd.so 00:04:03.084 CC lib/nvmf/subsystem.o 00:04:03.084 CC lib/ftl/ftl_l2p.o 00:04:03.084 CC lib/scsi/scsi_rpc.o 00:04:03.341 CC lib/ftl/ftl_l2p_flat.o 00:04:03.341 CC lib/ftl/ftl_nv_cache.o 00:04:03.341 CC lib/scsi/task.o 00:04:03.341 CC lib/nvmf/nvmf.o 00:04:03.341 LIB libspdk_ublk.a 00:04:03.341 SO libspdk_ublk.so.3.0 00:04:03.341 CC lib/ftl/ftl_band.o 00:04:03.341 CC lib/nvmf/nvmf_rpc.o 00:04:03.341 SYMLINK libspdk_ublk.so 00:04:03.341 CC lib/nvmf/transport.o 00:04:03.341 CC lib/ftl/ftl_band_ops.o 00:04:03.341 LIB libspdk_scsi.a 00:04:03.598 SO libspdk_scsi.so.9.0 00:04:03.598 SYMLINK libspdk_scsi.so 00:04:03.598 CC lib/ftl/ftl_writer.o 00:04:03.598 CC lib/ftl/ftl_rq.o 00:04:03.856 CC lib/ftl/ftl_reloc.o 00:04:03.856 CC lib/ftl/ftl_l2p_cache.o 00:04:03.856 CC lib/iscsi/conn.o 00:04:03.856 CC lib/vhost/vhost.o 00:04:04.113 CC lib/vhost/vhost_rpc.o 00:04:04.113 CC lib/vhost/vhost_scsi.o 00:04:04.113 CC lib/vhost/vhost_blk.o 00:04:04.113 CC lib/vhost/rte_vhost_user.o 00:04:04.370 CC lib/iscsi/init_grp.o 00:04:04.370 CC lib/iscsi/iscsi.o 00:04:04.370 CC lib/ftl/ftl_p2l.o 00:04:04.370 CC lib/nvmf/tcp.o 00:04:04.371 CC lib/nvmf/stubs.o 00:04:04.371 CC lib/nvmf/mdns_server.o 00:04:04.629 CC lib/nvmf/rdma.o 00:04:04.886 CC lib/ftl/ftl_p2l_log.o 00:04:04.886 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.886 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.886 CC lib/nvmf/auth.o 00:04:04.886 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:05.144 CC lib/iscsi/param.o 00:04:05.144 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:05.144 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:05.144 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.144 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.144 LIB libspdk_vhost.a 00:04:05.144 SO libspdk_vhost.so.8.0 00:04:05.144 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.401 SYMLINK libspdk_vhost.so 00:04:05.401 CC lib/iscsi/portal_grp.o 00:04:05.401 CC lib/iscsi/tgt_node.o 00:04:05.401 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.401 CC lib/iscsi/iscsi_subsystem.o 00:04:05.401 CC lib/iscsi/iscsi_rpc.o 00:04:05.401 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:05.659 CC lib/iscsi/task.o 00:04:05.659 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:05.659 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:05.659 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:05.659 CC lib/ftl/utils/ftl_conf.o 00:04:05.659 CC lib/ftl/utils/ftl_md.o 00:04:05.659 CC lib/ftl/utils/ftl_mempool.o 00:04:05.916 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.916 CC lib/ftl/utils/ftl_property.o 00:04:05.916 LIB libspdk_iscsi.a 00:04:05.916 SO libspdk_iscsi.so.8.0 00:04:05.916 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.916 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.916 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.916 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.916 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:06.231 SYMLINK libspdk_iscsi.so 00:04:06.231 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:06.231 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:06.231 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:06.231 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:06.231 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:06.231 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:06.231 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:06.231 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:06.231 CC lib/ftl/base/ftl_base_dev.o 00:04:06.231 CC 
lib/ftl/base/ftl_base_bdev.o 00:04:06.231 CC lib/ftl/ftl_trace.o 00:04:06.489 LIB libspdk_ftl.a 00:04:06.746 SO libspdk_ftl.so.9.0 00:04:06.746 LIB libspdk_nvmf.a 00:04:06.746 SYMLINK libspdk_ftl.so 00:04:07.004 SO libspdk_nvmf.so.20.0 00:04:07.004 SYMLINK libspdk_nvmf.so 00:04:07.261 CC module/env_dpdk/env_dpdk_rpc.o 00:04:07.261 CC module/accel/ioat/accel_ioat.o 00:04:07.261 CC module/sock/posix/posix.o 00:04:07.261 CC module/keyring/linux/keyring.o 00:04:07.261 CC module/keyring/file/keyring.o 00:04:07.518 CC module/accel/dsa/accel_dsa.o 00:04:07.518 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.518 CC module/accel/error/accel_error.o 00:04:07.518 CC module/blob/bdev/blob_bdev.o 00:04:07.518 CC module/fsdev/aio/fsdev_aio.o 00:04:07.518 LIB libspdk_env_dpdk_rpc.a 00:04:07.518 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.518 CC module/keyring/linux/keyring_rpc.o 00:04:07.518 CC module/keyring/file/keyring_rpc.o 00:04:07.518 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.518 CC module/accel/error/accel_error_rpc.o 00:04:07.518 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.518 LIB libspdk_scheduler_dynamic.a 00:04:07.518 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.518 LIB libspdk_keyring_linux.a 00:04:07.518 LIB libspdk_keyring_file.a 00:04:07.776 LIB libspdk_accel_error.a 00:04:07.776 SO libspdk_keyring_file.so.2.0 00:04:07.776 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.776 SO libspdk_keyring_linux.so.1.0 00:04:07.776 LIB libspdk_blob_bdev.a 00:04:07.776 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.776 LIB libspdk_accel_ioat.a 00:04:07.776 SO libspdk_accel_error.so.2.0 00:04:07.776 SO libspdk_blob_bdev.so.11.0 00:04:07.776 SYMLINK libspdk_keyring_file.so 00:04:07.776 SO libspdk_accel_ioat.so.6.0 00:04:07.776 SYMLINK libspdk_keyring_linux.so 00:04:07.776 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:07.776 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:07.776 SYMLINK libspdk_accel_error.so 00:04:07.776 CC module/fsdev/aio/linux_aio_mgr.o 00:04:07.776 SYMLINK libspdk_blob_bdev.so 00:04:07.776 SYMLINK libspdk_accel_ioat.so 00:04:07.776 LIB libspdk_accel_dsa.a 00:04:07.776 CC module/scheduler/gscheduler/gscheduler.o 00:04:07.776 SO libspdk_accel_dsa.so.5.0 00:04:07.776 CC module/accel/iaa/accel_iaa.o 00:04:07.776 SYMLINK libspdk_accel_dsa.so 00:04:07.776 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.776 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.034 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:08.034 LIB libspdk_scheduler_gscheduler.a 00:04:08.034 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.034 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.034 CC module/bdev/delay/vbdev_delay.o 00:04:08.034 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.034 LIB libspdk_sock_posix.a 00:04:08.034 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.034 CC module/bdev/error/vbdev_error.o 00:04:08.034 SO libspdk_sock_posix.so.6.0 00:04:08.034 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.034 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.034 LIB libspdk_accel_iaa.a 00:04:08.034 SO libspdk_accel_iaa.so.3.0 00:04:08.034 CC module/bdev/gpt/gpt.o 00:04:08.034 SYMLINK libspdk_sock_posix.so 00:04:08.034 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:08.034 LIB libspdk_fsdev_aio.a 00:04:08.034 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.034 SO libspdk_fsdev_aio.so.1.0 00:04:08.034 SYMLINK libspdk_accel_iaa.so 00:04:08.034 LIB libspdk_blobfs_bdev.a 00:04:08.034 CC module/bdev/lvol/vbdev_lvol.o 00:04:08.293 SO libspdk_blobfs_bdev.so.6.0 00:04:08.293 SYMLINK libspdk_fsdev_aio.so 00:04:08.293 SYMLINK 
libspdk_blobfs_bdev.so 00:04:08.293 LIB libspdk_bdev_error.a 00:04:08.293 SO libspdk_bdev_error.so.6.0 00:04:08.293 CC module/bdev/malloc/bdev_malloc.o 00:04:08.293 CC module/bdev/null/bdev_null.o 00:04:08.293 LIB libspdk_bdev_delay.a 00:04:08.293 SYMLINK libspdk_bdev_error.so 00:04:08.293 CC module/bdev/null/bdev_null_rpc.o 00:04:08.293 CC module/bdev/nvme/bdev_nvme.o 00:04:08.293 SO libspdk_bdev_delay.so.6.0 00:04:08.293 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.293 CC module/bdev/raid/bdev_raid.o 00:04:08.293 CC module/bdev/split/vbdev_split.o 00:04:08.293 LIB libspdk_bdev_gpt.a 00:04:08.550 SO libspdk_bdev_gpt.so.6.0 00:04:08.550 SYMLINK libspdk_bdev_delay.so 00:04:08.550 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.550 SYMLINK libspdk_bdev_gpt.so 00:04:08.550 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.550 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.550 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.550 LIB libspdk_bdev_null.a 00:04:08.550 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.550 CC module/bdev/raid/raid0.o 00:04:08.550 SO libspdk_bdev_null.so.6.0 00:04:08.550 LIB libspdk_bdev_split.a 00:04:08.550 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.550 LIB libspdk_bdev_malloc.a 00:04:08.550 LIB libspdk_bdev_passthru.a 00:04:08.550 SO libspdk_bdev_split.so.6.0 00:04:08.550 SYMLINK libspdk_bdev_null.so 00:04:08.550 SO libspdk_bdev_malloc.so.6.0 00:04:08.808 SO libspdk_bdev_passthru.so.6.0 00:04:08.808 SYMLINK libspdk_bdev_split.so 00:04:08.808 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.808 SYMLINK libspdk_bdev_malloc.so 00:04:08.808 SYMLINK libspdk_bdev_passthru.so 00:04:08.808 CC module/bdev/nvme/nvme_rpc.o 00:04:08.808 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.808 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:08.808 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:08.808 CC module/bdev/xnvme/bdev_xnvme.o 00:04:08.808 CC module/bdev/aio/bdev_aio.o 00:04:09.077 CC module/bdev/aio/bdev_aio_rpc.o 00:04:09.077 CC module/bdev/nvme/vbdev_opal.o 00:04:09.077 LIB libspdk_bdev_lvol.a 00:04:09.077 SO libspdk_bdev_lvol.so.6.0 00:04:09.077 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:09.077 SYMLINK libspdk_bdev_lvol.so 00:04:09.077 LIB libspdk_bdev_zone_block.a 00:04:09.077 CC module/bdev/ftl/bdev_ftl.o 00:04:09.077 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:09.077 SO libspdk_bdev_zone_block.so.6.0 00:04:09.334 CC module/bdev/raid/raid1.o 00:04:09.334 SYMLINK libspdk_bdev_zone_block.so 00:04:09.334 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:09.334 LIB libspdk_bdev_aio.a 00:04:09.334 SO libspdk_bdev_aio.so.6.0 00:04:09.334 CC module/bdev/iscsi/bdev_iscsi.o 00:04:09.334 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:09.334 LIB libspdk_bdev_xnvme.a 00:04:09.334 SYMLINK libspdk_bdev_aio.so 00:04:09.334 CC module/bdev/raid/concat.o 00:04:09.334 SO libspdk_bdev_xnvme.so.3.0 00:04:09.334 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.334 SYMLINK libspdk_bdev_xnvme.so 00:04:09.591 LIB libspdk_bdev_ftl.a 00:04:09.591 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:09.591 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:09.591 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:09.591 LIB libspdk_bdev_raid.a 00:04:09.591 SO libspdk_bdev_ftl.so.6.0 00:04:09.591 SO libspdk_bdev_raid.so.6.0 00:04:09.591 SYMLINK libspdk_bdev_ftl.so 00:04:09.591 LIB libspdk_bdev_iscsi.a 00:04:09.591 SYMLINK libspdk_bdev_raid.so 00:04:09.591 SO libspdk_bdev_iscsi.so.6.0 00:04:09.848 SYMLINK libspdk_bdev_iscsi.so 00:04:10.106 LIB libspdk_bdev_virtio.a 00:04:10.106 SO libspdk_bdev_virtio.so.6.0 
00:04:10.106 SYMLINK libspdk_bdev_virtio.so 00:04:11.038 LIB libspdk_bdev_nvme.a 00:04:11.038 SO libspdk_bdev_nvme.so.7.1 00:04:11.297 SYMLINK libspdk_bdev_nvme.so 00:04:11.555 CC module/event/subsystems/iobuf/iobuf.o 00:04:11.555 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:11.555 CC module/event/subsystems/vmd/vmd.o 00:04:11.555 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.555 CC module/event/subsystems/fsdev/fsdev.o 00:04:11.555 CC module/event/subsystems/scheduler/scheduler.o 00:04:11.555 CC module/event/subsystems/sock/sock.o 00:04:11.555 CC module/event/subsystems/keyring/keyring.o 00:04:11.555 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:11.555 LIB libspdk_event_scheduler.a 00:04:11.555 LIB libspdk_event_fsdev.a 00:04:11.555 LIB libspdk_event_vmd.a 00:04:11.555 LIB libspdk_event_iobuf.a 00:04:11.555 LIB libspdk_event_sock.a 00:04:11.555 LIB libspdk_event_keyring.a 00:04:11.555 LIB libspdk_event_vhost_blk.a 00:04:11.555 SO libspdk_event_fsdev.so.1.0 00:04:11.555 SO libspdk_event_scheduler.so.4.0 00:04:11.812 SO libspdk_event_keyring.so.1.0 00:04:11.812 SO libspdk_event_iobuf.so.3.0 00:04:11.812 SO libspdk_event_vmd.so.6.0 00:04:11.812 SO libspdk_event_sock.so.5.0 00:04:11.812 SO libspdk_event_vhost_blk.so.3.0 00:04:11.812 SYMLINK libspdk_event_scheduler.so 00:04:11.812 SYMLINK libspdk_event_fsdev.so 00:04:11.812 SYMLINK libspdk_event_keyring.so 00:04:11.812 SYMLINK libspdk_event_sock.so 00:04:11.812 SYMLINK libspdk_event_iobuf.so 00:04:11.812 SYMLINK libspdk_event_vhost_blk.so 00:04:11.812 SYMLINK libspdk_event_vmd.so 00:04:12.069 CC module/event/subsystems/accel/accel.o 00:04:12.069 LIB libspdk_event_accel.a 00:04:12.069 SO libspdk_event_accel.so.6.0 00:04:12.069 SYMLINK libspdk_event_accel.so 00:04:12.326 CC module/event/subsystems/bdev/bdev.o 00:04:12.583 LIB libspdk_event_bdev.a 00:04:12.583 SO libspdk_event_bdev.so.6.0 00:04:12.583 SYMLINK libspdk_event_bdev.so 00:04:12.841 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:12.841 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:12.841 CC module/event/subsystems/ublk/ublk.o 00:04:12.841 CC module/event/subsystems/nbd/nbd.o 00:04:12.841 CC module/event/subsystems/scsi/scsi.o 00:04:12.841 LIB libspdk_event_scsi.a 00:04:12.841 LIB libspdk_event_ublk.a 00:04:12.841 LIB libspdk_event_nbd.a 00:04:12.841 SO libspdk_event_scsi.so.6.0 00:04:12.841 SO libspdk_event_ublk.so.3.0 00:04:12.841 SO libspdk_event_nbd.so.6.0 00:04:12.841 SYMLINK libspdk_event_ublk.so 00:04:12.841 SYMLINK libspdk_event_scsi.so 00:04:12.841 LIB libspdk_event_nvmf.a 00:04:12.841 SYMLINK libspdk_event_nbd.so 00:04:13.129 SO libspdk_event_nvmf.so.6.0 00:04:13.129 SYMLINK libspdk_event_nvmf.so 00:04:13.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.129 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.386 LIB libspdk_event_vhost_scsi.a 00:04:13.386 LIB libspdk_event_iscsi.a 00:04:13.386 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.386 SO libspdk_event_iscsi.so.6.0 00:04:13.386 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.386 SYMLINK libspdk_event_iscsi.so 00:04:13.386 SO libspdk.so.6.0 00:04:13.386 SYMLINK libspdk.so 00:04:13.644 TEST_HEADER include/spdk/accel.h 00:04:13.644 TEST_HEADER include/spdk/accel_module.h 00:04:13.644 CXX app/trace/trace.o 00:04:13.644 TEST_HEADER include/spdk/assert.h 00:04:13.644 CC test/rpc_client/rpc_client_test.o 00:04:13.644 TEST_HEADER include/spdk/barrier.h 00:04:13.644 TEST_HEADER include/spdk/base64.h 00:04:13.644 CC app/trace_record/trace_record.o 00:04:13.644 TEST_HEADER include/spdk/bdev.h 
00:04:13.644 TEST_HEADER include/spdk/bdev_module.h 00:04:13.644 TEST_HEADER include/spdk/bdev_zone.h 00:04:13.644 TEST_HEADER include/spdk/bit_array.h 00:04:13.644 TEST_HEADER include/spdk/bit_pool.h 00:04:13.644 TEST_HEADER include/spdk/blob_bdev.h 00:04:13.644 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:13.644 TEST_HEADER include/spdk/blobfs.h 00:04:13.644 TEST_HEADER include/spdk/blob.h 00:04:13.644 TEST_HEADER include/spdk/conf.h 00:04:13.644 TEST_HEADER include/spdk/config.h 00:04:13.644 TEST_HEADER include/spdk/cpuset.h 00:04:13.644 TEST_HEADER include/spdk/crc16.h 00:04:13.644 TEST_HEADER include/spdk/crc32.h 00:04:13.644 TEST_HEADER include/spdk/crc64.h 00:04:13.644 TEST_HEADER include/spdk/dif.h 00:04:13.644 TEST_HEADER include/spdk/dma.h 00:04:13.644 TEST_HEADER include/spdk/endian.h 00:04:13.644 TEST_HEADER include/spdk/env_dpdk.h 00:04:13.644 TEST_HEADER include/spdk/env.h 00:04:13.644 TEST_HEADER include/spdk/event.h 00:04:13.644 TEST_HEADER include/spdk/fd_group.h 00:04:13.644 TEST_HEADER include/spdk/fd.h 00:04:13.644 TEST_HEADER include/spdk/file.h 00:04:13.644 TEST_HEADER include/spdk/fsdev.h 00:04:13.644 TEST_HEADER include/spdk/fsdev_module.h 00:04:13.644 TEST_HEADER include/spdk/ftl.h 00:04:13.644 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:13.644 CC examples/util/zipf/zipf.o 00:04:13.644 TEST_HEADER include/spdk/gpt_spec.h 00:04:13.644 TEST_HEADER include/spdk/hexlify.h 00:04:13.644 TEST_HEADER include/spdk/histogram_data.h 00:04:13.644 TEST_HEADER include/spdk/idxd.h 00:04:13.644 CC test/thread/poller_perf/poller_perf.o 00:04:13.644 TEST_HEADER include/spdk/idxd_spec.h 00:04:13.644 TEST_HEADER include/spdk/init.h 00:04:13.644 TEST_HEADER include/spdk/ioat.h 00:04:13.644 CC examples/ioat/perf/perf.o 00:04:13.644 TEST_HEADER include/spdk/ioat_spec.h 00:04:13.644 TEST_HEADER include/spdk/iscsi_spec.h 00:04:13.644 TEST_HEADER include/spdk/json.h 00:04:13.644 TEST_HEADER include/spdk/jsonrpc.h 00:04:13.644 TEST_HEADER include/spdk/keyring.h 00:04:13.644 TEST_HEADER include/spdk/keyring_module.h 00:04:13.644 TEST_HEADER include/spdk/likely.h 00:04:13.644 TEST_HEADER include/spdk/log.h 00:04:13.644 TEST_HEADER include/spdk/lvol.h 00:04:13.644 CC test/dma/test_dma/test_dma.o 00:04:13.644 TEST_HEADER include/spdk/md5.h 00:04:13.644 TEST_HEADER include/spdk/memory.h 00:04:13.644 TEST_HEADER include/spdk/mmio.h 00:04:13.644 TEST_HEADER include/spdk/nbd.h 00:04:13.644 TEST_HEADER include/spdk/net.h 00:04:13.644 TEST_HEADER include/spdk/notify.h 00:04:13.644 TEST_HEADER include/spdk/nvme.h 00:04:13.644 TEST_HEADER include/spdk/nvme_intel.h 00:04:13.644 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:13.644 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:13.644 TEST_HEADER include/spdk/nvme_spec.h 00:04:13.644 TEST_HEADER include/spdk/nvme_zns.h 00:04:13.644 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:13.644 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:13.644 TEST_HEADER include/spdk/nvmf.h 00:04:13.644 TEST_HEADER include/spdk/nvmf_spec.h 00:04:13.644 TEST_HEADER include/spdk/nvmf_transport.h 00:04:13.644 TEST_HEADER include/spdk/opal.h 00:04:13.644 TEST_HEADER include/spdk/opal_spec.h 00:04:13.644 CC test/app/bdev_svc/bdev_svc.o 00:04:13.644 TEST_HEADER include/spdk/pci_ids.h 00:04:13.902 TEST_HEADER include/spdk/pipe.h 00:04:13.902 TEST_HEADER include/spdk/queue.h 00:04:13.902 TEST_HEADER include/spdk/reduce.h 00:04:13.902 TEST_HEADER include/spdk/rpc.h 00:04:13.902 TEST_HEADER include/spdk/scheduler.h 00:04:13.902 TEST_HEADER include/spdk/scsi.h 00:04:13.902 TEST_HEADER 
include/spdk/scsi_spec.h 00:04:13.902 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.902 TEST_HEADER include/spdk/sock.h 00:04:13.902 TEST_HEADER include/spdk/stdinc.h 00:04:13.902 TEST_HEADER include/spdk/string.h 00:04:13.902 TEST_HEADER include/spdk/thread.h 00:04:13.902 TEST_HEADER include/spdk/trace.h 00:04:13.902 TEST_HEADER include/spdk/trace_parser.h 00:04:13.902 TEST_HEADER include/spdk/tree.h 00:04:13.902 TEST_HEADER include/spdk/ublk.h 00:04:13.902 TEST_HEADER include/spdk/util.h 00:04:13.902 TEST_HEADER include/spdk/uuid.h 00:04:13.902 TEST_HEADER include/spdk/version.h 00:04:13.902 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.902 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.902 TEST_HEADER include/spdk/vhost.h 00:04:13.902 TEST_HEADER include/spdk/vmd.h 00:04:13.902 TEST_HEADER include/spdk/xor.h 00:04:13.902 TEST_HEADER include/spdk/zipf.h 00:04:13.902 CXX test/cpp_headers/accel.o 00:04:13.902 LINK rpc_client_test 00:04:13.902 LINK poller_perf 00:04:13.902 LINK zipf 00:04:13.902 LINK spdk_trace_record 00:04:13.902 LINK ioat_perf 00:04:13.902 LINK bdev_svc 00:04:13.902 CXX test/cpp_headers/accel_module.o 00:04:13.902 CXX test/cpp_headers/assert.o 00:04:13.902 CXX test/cpp_headers/barrier.o 00:04:13.902 CC test/env/vtophys/vtophys.o 00:04:14.160 LINK spdk_trace 00:04:14.160 LINK test_dma 00:04:14.160 CC examples/ioat/verify/verify.o 00:04:14.160 CXX test/cpp_headers/base64.o 00:04:14.160 LINK vtophys 00:04:14.160 CC test/app/histogram_perf/histogram_perf.o 00:04:14.160 LINK mem_callbacks 00:04:14.160 CC test/event/event_perf/event_perf.o 00:04:14.160 CC test/app/jsoncat/jsoncat.o 00:04:14.160 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.160 CXX test/cpp_headers/bdev.o 00:04:14.419 CC app/nvmf_tgt/nvmf_main.o 00:04:14.419 LINK jsoncat 00:04:14.419 LINK event_perf 00:04:14.419 LINK histogram_perf 00:04:14.419 LINK verify 00:04:14.419 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.419 CC test/app/stub/stub.o 00:04:14.419 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.419 CXX test/cpp_headers/bdev_module.o 00:04:14.419 CXX test/cpp_headers/bdev_zone.o 00:04:14.419 LINK nvmf_tgt 00:04:14.419 CC test/event/reactor/reactor.o 00:04:14.419 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.419 LINK stub 00:04:14.419 LINK env_dpdk_post_init 00:04:14.676 LINK nvme_fuzz 00:04:14.676 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.676 LINK reactor 00:04:14.676 CXX test/cpp_headers/bit_array.o 00:04:14.676 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.676 CXX test/cpp_headers/bit_pool.o 00:04:14.676 CXX test/cpp_headers/blob_bdev.o 00:04:14.676 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.677 LINK interrupt_tgt 00:04:14.677 CC test/env/memory/memory_ut.o 00:04:14.677 CC test/env/pci/pci_ut.o 00:04:14.677 CC test/event/reactor_perf/reactor_perf.o 00:04:14.934 CC app/spdk_lspci/spdk_lspci.o 00:04:14.934 CXX test/cpp_headers/blobfs_bdev.o 00:04:14.934 LINK iscsi_tgt 00:04:14.934 CC app/spdk_tgt/spdk_tgt.o 00:04:14.934 LINK reactor_perf 00:04:14.934 LINK spdk_lspci 00:04:14.934 CXX test/cpp_headers/blobfs.o 00:04:14.934 CXX test/cpp_headers/blob.o 00:04:14.934 LINK vhost_fuzz 00:04:14.934 CC examples/thread/thread/thread_ex.o 00:04:15.192 LINK spdk_tgt 00:04:15.192 CC test/event/app_repeat/app_repeat.o 00:04:15.192 LINK pci_ut 00:04:15.192 CXX test/cpp_headers/conf.o 00:04:15.192 CXX test/cpp_headers/config.o 00:04:15.192 CC test/event/scheduler/scheduler.o 00:04:15.192 LINK app_repeat 00:04:15.192 LINK thread 00:04:15.192 CXX test/cpp_headers/cpuset.o 
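The TEST_HEADER / CXX test/cpp_headers pairs above are SPDK's header self-containedness check: each public header is compiled as its own translation unit, so a header that fails to include what it uses breaks here in isolation. A minimal sketch of the idea only (a hypothetical generator loop, not SPDK's actual test/cpp_headers makefile):

# For every public header, emit a one-line .cpp that includes just that
# header; each generated file is then compiled on its own (the CXX lines
# above), which fails if the header is not self-contained.
for h in include/spdk/*.h; do
  name=$(basename "$h" .h)
  printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/$name.cpp"
done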
00:04:15.450 CC app/spdk_nvme_perf/perf.o 00:04:15.450 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.450 CC examples/sock/hello_world/hello_sock.o 00:04:15.450 CXX test/cpp_headers/crc16.o 00:04:15.450 CC examples/vmd/led/led.o 00:04:15.450 LINK scheduler 00:04:15.450 CXX test/cpp_headers/crc32.o 00:04:15.450 LINK lsvmd 00:04:15.450 CC examples/idxd/perf/perf.o 00:04:15.711 LINK led 00:04:15.711 CXX test/cpp_headers/crc64.o 00:04:15.711 CXX test/cpp_headers/dif.o 00:04:15.711 LINK hello_sock 00:04:15.711 CC examples/accel/perf/accel_perf.o 00:04:15.711 CXX test/cpp_headers/dma.o 00:04:15.711 CC examples/blob/hello_world/hello_blob.o 00:04:15.711 CC app/spdk_nvme_identify/identify.o 00:04:15.711 CC app/spdk_nvme_discover/discovery_aer.o 00:04:15.970 LINK memory_ut 00:04:15.970 LINK idxd_perf 00:04:15.970 CXX test/cpp_headers/endian.o 00:04:15.970 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:15.970 CXX test/cpp_headers/env_dpdk.o 00:04:15.970 LINK spdk_nvme_discover 00:04:15.970 LINK hello_blob 00:04:15.970 LINK iscsi_fuzz 00:04:15.970 CC app/spdk_top/spdk_top.o 00:04:16.227 CXX test/cpp_headers/env.o 00:04:16.227 CC examples/nvme/hello_world/hello_world.o 00:04:16.227 CXX test/cpp_headers/event.o 00:04:16.227 LINK hello_fsdev 00:04:16.227 LINK accel_perf 00:04:16.227 LINK spdk_nvme_perf 00:04:16.227 CXX test/cpp_headers/fd_group.o 00:04:16.227 CC examples/blob/cli/blobcli.o 00:04:16.227 CC examples/nvme/reconnect/reconnect.o 00:04:16.228 LINK hello_world 00:04:16.492 CXX test/cpp_headers/fd.o 00:04:16.492 CC app/vhost/vhost.o 00:04:16.492 CC app/spdk_dd/spdk_dd.o 00:04:16.492 CC app/fio/nvme/fio_plugin.o 00:04:16.492 CXX test/cpp_headers/file.o 00:04:16.492 CC examples/bdev/hello_world/hello_bdev.o 00:04:16.492 CC examples/bdev/bdevperf/bdevperf.o 00:04:16.750 LINK vhost 00:04:16.750 LINK spdk_nvme_identify 00:04:16.750 CXX test/cpp_headers/fsdev.o 00:04:16.750 LINK reconnect 00:04:16.750 LINK hello_bdev 00:04:16.750 CXX test/cpp_headers/fsdev_module.o 00:04:16.750 LINK blobcli 00:04:16.750 CXX test/cpp_headers/ftl.o 00:04:16.750 LINK spdk_dd 00:04:17.007 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:17.007 CXX test/cpp_headers/fuse_dispatcher.o 00:04:17.007 CC test/accel/dif/dif.o 00:04:17.007 CC examples/nvme/hotplug/hotplug.o 00:04:17.007 CC examples/nvme/arbitration/arbitration.o 00:04:17.007 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:17.007 LINK spdk_top 00:04:17.007 CC examples/nvme/abort/abort.o 00:04:17.007 LINK spdk_nvme 00:04:17.007 CXX test/cpp_headers/gpt_spec.o 00:04:17.265 LINK cmb_copy 00:04:17.265 CXX test/cpp_headers/hexlify.o 00:04:17.265 LINK hotplug 00:04:17.265 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:17.265 CC app/fio/bdev/fio_plugin.o 00:04:17.265 LINK arbitration 00:04:17.265 CXX test/cpp_headers/histogram_data.o 00:04:17.265 CXX test/cpp_headers/idxd.o 00:04:17.265 LINK abort 00:04:17.265 LINK nvme_manage 00:04:17.524 LINK pmr_persistence 00:04:17.524 CC test/blobfs/mkfs/mkfs.o 00:04:17.524 CXX test/cpp_headers/idxd_spec.o 00:04:17.524 CXX test/cpp_headers/init.o 00:04:17.524 LINK bdevperf 00:04:17.524 CXX test/cpp_headers/ioat.o 00:04:17.524 CXX test/cpp_headers/ioat_spec.o 00:04:17.524 CXX test/cpp_headers/iscsi_spec.o 00:04:17.524 CXX test/cpp_headers/json.o 00:04:17.524 CXX test/cpp_headers/jsonrpc.o 00:04:17.524 LINK mkfs 00:04:17.524 CC test/lvol/esnap/esnap.o 00:04:17.524 LINK dif 00:04:17.782 CXX test/cpp_headers/keyring.o 00:04:17.782 CXX test/cpp_headers/keyring_module.o 00:04:17.782 CXX test/cpp_headers/likely.o 00:04:17.782 LINK 
spdk_bdev 00:04:17.782 CXX test/cpp_headers/log.o 00:04:17.782 CXX test/cpp_headers/lvol.o 00:04:17.782 CXX test/cpp_headers/md5.o 00:04:17.782 CXX test/cpp_headers/memory.o 00:04:17.782 CC examples/nvmf/nvmf/nvmf.o 00:04:17.782 CC test/nvme/aer/aer.o 00:04:17.782 CC test/nvme/reset/reset.o 00:04:17.782 CXX test/cpp_headers/mmio.o 00:04:17.782 CXX test/cpp_headers/nbd.o 00:04:17.782 CC test/nvme/sgl/sgl.o 00:04:18.039 CXX test/cpp_headers/net.o 00:04:18.039 CC test/bdev/bdevio/bdevio.o 00:04:18.039 CXX test/cpp_headers/notify.o 00:04:18.039 CXX test/cpp_headers/nvme.o 00:04:18.039 CXX test/cpp_headers/nvme_intel.o 00:04:18.039 LINK nvmf 00:04:18.039 LINK aer 00:04:18.039 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.039 LINK reset 00:04:18.039 CC test/nvme/e2edp/nvme_dp.o 00:04:18.039 LINK sgl 00:04:18.296 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.296 CXX test/cpp_headers/nvme_spec.o 00:04:18.296 CXX test/cpp_headers/nvme_zns.o 00:04:18.296 CC test/nvme/overhead/overhead.o 00:04:18.296 CC test/nvme/err_injection/err_injection.o 00:04:18.296 CC test/nvme/startup/startup.o 00:04:18.296 LINK bdevio 00:04:18.296 CC test/nvme/reserve/reserve.o 00:04:18.296 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.296 CC test/nvme/simple_copy/simple_copy.o 00:04:18.296 LINK nvme_dp 00:04:18.554 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.554 LINK startup 00:04:18.554 LINK err_injection 00:04:18.554 CXX test/cpp_headers/nvmf.o 00:04:18.554 CXX test/cpp_headers/nvmf_spec.o 00:04:18.554 CXX test/cpp_headers/nvmf_transport.o 00:04:18.554 LINK overhead 00:04:18.554 LINK reserve 00:04:18.554 CXX test/cpp_headers/opal.o 00:04:18.554 LINK simple_copy 00:04:18.554 CXX test/cpp_headers/opal_spec.o 00:04:18.812 CC test/nvme/boot_partition/boot_partition.o 00:04:18.812 CC test/nvme/connect_stress/connect_stress.o 00:04:18.812 CC test/nvme/compliance/nvme_compliance.o 00:04:18.812 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.812 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.812 CXX test/cpp_headers/pci_ids.o 00:04:18.812 CC test/nvme/fdp/fdp.o 00:04:18.812 CC test/nvme/cuse/cuse.o 00:04:18.812 CXX test/cpp_headers/pipe.o 00:04:18.812 LINK boot_partition 00:04:18.812 LINK connect_stress 00:04:18.812 CXX test/cpp_headers/queue.o 00:04:18.812 LINK fused_ordering 00:04:18.812 CXX test/cpp_headers/reduce.o 00:04:19.070 LINK doorbell_aers 00:04:19.070 CXX test/cpp_headers/rpc.o 00:04:19.070 CXX test/cpp_headers/scheduler.o 00:04:19.070 CXX test/cpp_headers/scsi.o 00:04:19.070 CXX test/cpp_headers/scsi_spec.o 00:04:19.070 CXX test/cpp_headers/sock.o 00:04:19.070 CXX test/cpp_headers/stdinc.o 00:04:19.070 LINK nvme_compliance 00:04:19.070 CXX test/cpp_headers/string.o 00:04:19.070 CXX test/cpp_headers/thread.o 00:04:19.070 LINK fdp 00:04:19.070 CXX test/cpp_headers/trace.o 00:04:19.070 CXX test/cpp_headers/trace_parser.o 00:04:19.070 CXX test/cpp_headers/tree.o 00:04:19.331 CXX test/cpp_headers/ublk.o 00:04:19.331 CXX test/cpp_headers/util.o 00:04:19.331 CXX test/cpp_headers/uuid.o 00:04:19.331 CXX test/cpp_headers/version.o 00:04:19.331 CXX test/cpp_headers/vfio_user_pci.o 00:04:19.331 CXX test/cpp_headers/vfio_user_spec.o 00:04:19.331 CXX test/cpp_headers/vhost.o 00:04:19.331 CXX test/cpp_headers/vmd.o 00:04:19.331 CXX test/cpp_headers/xor.o 00:04:19.331 CXX test/cpp_headers/zipf.o 00:04:20.262 LINK cuse 00:04:22.158 LINK esnap 00:04:22.418 00:04:22.418 real 1m10.640s 00:04:22.418 user 6m33.271s 00:04:22.418 sys 1m7.887s 00:04:22.418 15:52:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 
00:04:22.418 15:52:20 make -- common/autotest_common.sh@10 -- $ set +x 00:04:22.418 ************************************ 00:04:22.418 END TEST make 00:04:22.418 ************************************ 00:04:22.418 15:52:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:22.418 15:52:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:22.418 15:52:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:22.418 15:52:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.418 15:52:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:22.418 15:52:20 -- pm/common@44 -- $ pid=5070 00:04:22.418 15:52:20 -- pm/common@50 -- $ kill -TERM 5070 00:04:22.418 15:52:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.418 15:52:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:22.418 15:52:20 -- pm/common@44 -- $ pid=5072 00:04:22.418 15:52:20 -- pm/common@50 -- $ kill -TERM 5072 00:04:22.418 15:52:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:22.418 15:52:20 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:22.678 15:52:20 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.678 15:52:20 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.678 15:52:20 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.678 15:52:20 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.678 15:52:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.678 15:52:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.678 15:52:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.678 15:52:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.678 15:52:20 -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.678 15:52:20 -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.678 15:52:20 -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.678 15:52:20 -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.678 15:52:20 -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.678 15:52:20 -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.678 15:52:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.678 15:52:20 -- scripts/common.sh@344 -- # case "$op" in 00:04:22.678 15:52:20 -- scripts/common.sh@345 -- # : 1 00:04:22.678 15:52:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.678 15:52:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.678 15:52:20 -- scripts/common.sh@365 -- # decimal 1 00:04:22.678 15:52:20 -- scripts/common.sh@353 -- # local d=1 00:04:22.678 15:52:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.678 15:52:20 -- scripts/common.sh@355 -- # echo 1 00:04:22.678 15:52:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.678 15:52:20 -- scripts/common.sh@366 -- # decimal 2 00:04:22.678 15:52:20 -- scripts/common.sh@353 -- # local d=2 00:04:22.678 15:52:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.678 15:52:20 -- scripts/common.sh@355 -- # echo 2 00:04:22.678 15:52:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.678 15:52:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.678 15:52:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.678 15:52:20 -- scripts/common.sh@368 -- # return 0 00:04:22.678 15:52:20 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.678 15:52:20 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.678 --rc genhtml_branch_coverage=1 00:04:22.678 --rc genhtml_function_coverage=1 00:04:22.678 --rc genhtml_legend=1 00:04:22.678 --rc geninfo_all_blocks=1 00:04:22.678 --rc geninfo_unexecuted_blocks=1 00:04:22.678 00:04:22.678 ' 00:04:22.678 15:52:20 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.678 --rc genhtml_branch_coverage=1 00:04:22.678 --rc genhtml_function_coverage=1 00:04:22.678 --rc genhtml_legend=1 00:04:22.678 --rc geninfo_all_blocks=1 00:04:22.678 --rc geninfo_unexecuted_blocks=1 00:04:22.678 00:04:22.678 ' 00:04:22.678 15:52:20 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.678 --rc genhtml_branch_coverage=1 00:04:22.678 --rc genhtml_function_coverage=1 00:04:22.678 --rc genhtml_legend=1 00:04:22.678 --rc geninfo_all_blocks=1 00:04:22.678 --rc geninfo_unexecuted_blocks=1 00:04:22.678 00:04:22.678 ' 00:04:22.678 15:52:20 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.678 --rc genhtml_branch_coverage=1 00:04:22.678 --rc genhtml_function_coverage=1 00:04:22.678 --rc genhtml_legend=1 00:04:22.678 --rc geninfo_all_blocks=1 00:04:22.678 --rc geninfo_unexecuted_blocks=1 00:04:22.678 00:04:22.678 ' 00:04:22.678 15:52:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.678 15:52:20 -- nvmf/common.sh@7 -- # uname -s 00:04:22.678 15:52:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.678 15:52:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.678 15:52:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.678 15:52:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.678 15:52:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.678 15:52:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.678 15:52:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.678 15:52:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.678 15:52:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.678 15:52:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.678 15:52:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5dd8629-8fab-42cc-a050-2b8fda752ad8 00:04:22.678 
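The scripts/common.sh trace above is a plain-bash version comparison (lt 1.15 2) used to decide whether the installed lcov still wants the legacy --rc lcov_* option names. A standalone sketch of the same dot/dash-split compare, assuming purely numeric components (the real cmp_versions also validates each field with a regex):

# lt VER1 VER2 -> returns 0 (true) when VER1 < VER2
lt() {
  local IFS=.- v d1 d2
  local -a ver1 ver2
  read -ra ver1 <<< "$1"; read -ra ver2 <<< "$2"
  # walk the longer of the two component lists, padding the shorter with 0
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
    (( d1 < d2 )) && return 0    # first differing component decides
    (( d1 > d2 )) && return 1
  done
  return 1                       # equal is not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_*_coverage options"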
15:52:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=d5dd8629-8fab-42cc-a050-2b8fda752ad8 00:04:22.678 15:52:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.678 15:52:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.678 15:52:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.678 15:52:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.678 15:52:20 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.678 15:52:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.678 15:52:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.678 15:52:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.678 15:52:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.678 15:52:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.678 15:52:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.678 15:52:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.678 15:52:20 -- paths/export.sh@5 -- # export PATH 00:04:22.678 15:52:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.678 15:52:20 -- nvmf/common.sh@51 -- # : 0 00:04:22.678 15:52:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.678 15:52:20 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.678 15:52:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.678 15:52:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.678 15:52:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.678 15:52:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.678 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.678 15:52:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.678 15:52:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.678 15:52:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.678 15:52:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:22.678 15:52:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:22.678 15:52:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:22.678 15:52:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:22.678 15:52:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:22.678 15:52:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:22.678 15:52:20 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:22.678 15:52:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:22.678 15:52:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:22.678 15:52:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:22.678 15:52:20 -- spdk/autotest.sh@48 -- # udevadm_pid=54269 00:04:22.678 15:52:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:22.678 15:52:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:22.678 15:52:20 -- pm/common@17 -- # local monitor 00:04:22.678 15:52:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.678 15:52:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.678 15:52:20 -- pm/common@25 -- # sleep 1 00:04:22.678 15:52:20 -- pm/common@21 -- # date +%s 00:04:22.678 15:52:20 -- pm/common@21 -- # date +%s 00:04:22.678 15:52:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732117940 00:04:22.678 15:52:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732117940 00:04:22.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732117940_collect-vmstat.pm.log 00:04:22.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732117940_collect-cpu-load.pm.log 00:04:23.613 15:52:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:23.613 15:52:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:23.613 15:52:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.613 15:52:21 -- common/autotest_common.sh@10 -- # set +x 00:04:23.613 15:52:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:23.613 15:52:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:23.613 15:52:21 -- common/autotest_common.sh@10 -- # set +x 00:04:23.871 15:52:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:23.871 15:52:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:23.871 15:52:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:23.871 15:52:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:23.871 15:52:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:23.871 15:52:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:23.871 15:52:21 -- common/autotest_common.sh@1457 -- # uname 00:04:23.871 15:52:21 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:23.871 15:52:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:23.871 15:52:21 -- common/autotest_common.sh@1477 -- # uname 00:04:23.871 15:52:21 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:23.871 15:52:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:23.871 15:52:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:23.871 lcov: LCOV version 1.15 00:04:23.871 15:52:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:38.780 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:38.780 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:53.663 15:52:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:53.663 15:52:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.663 15:52:50 -- common/autotest_common.sh@10 -- # set +x 00:04:53.663 15:52:50 -- spdk/autotest.sh@78 -- # rm -f 00:04:53.663 15:52:50 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.663 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:53.663 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:53.663 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:53.663 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:53.663 15:52:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:53.663 15:52:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:53.663 15:52:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:53.663 15:52:51 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:53.663 15:52:51 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.663 15:52:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:53.663 15:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:53.663 15:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.663 15:52:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:53.663 15:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.663 15:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.663 15:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:53.663 15:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:53.663 15:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:53.663 No valid GPT data, bailing 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # pt= 00:04:53.663 15:52:51 -- scripts/common.sh@395 -- # return 1 00:04:53.663 15:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:53.663 1+0 records in 00:04:53.663 1+0 records out 00:04:53.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112948 s, 92.8 MB/s 00:04:53.663 15:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.663 15:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.663 15:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:53.663 15:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:53.663 15:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:53.663 No valid GPT data, bailing 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # pt= 00:04:53.663 15:52:51 -- scripts/common.sh@395 -- # return 1 00:04:53.663 15:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:53.663 1+0 records in 00:04:53.663 1+0 records out 00:04:53.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394749 s, 266 MB/s 00:04:53.663 15:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.663 15:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.663 15:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:53.663 15:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:53.663 15:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:53.663 No valid GPT data, bailing 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # pt= 00:04:53.663 15:52:51 -- scripts/common.sh@395 -- # return 1 00:04:53.663 15:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:53.663 1+0 
records in 00:04:53.663 1+0 records out 00:04:53.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447261 s, 234 MB/s 00:04:53.663 15:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.663 15:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.663 15:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:53.663 15:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:53.663 15:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:53.663 No valid GPT data, bailing 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:53.663 15:52:51 -- scripts/common.sh@394 -- # pt= 00:04:53.663 15:52:51 -- scripts/common.sh@395 -- # return 1 00:04:53.664 15:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:53.664 1+0 records in 00:04:53.664 1+0 records out 00:04:53.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393232 s, 267 MB/s 00:04:53.664 15:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.664 15:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.664 15:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:53.664 15:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:53.664 15:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:53.664 No valid GPT data, bailing 00:04:53.664 15:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:53.664 15:52:51 -- scripts/common.sh@394 -- # pt= 00:04:53.664 15:52:51 -- scripts/common.sh@395 -- # return 1 00:04:53.664 15:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:53.664 1+0 records in 00:04:53.664 1+0 records out 00:04:53.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443035 s, 237 MB/s 00:04:53.664 15:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.664 15:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.664 15:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:53.664 15:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:53.664 15:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:53.664 No valid GPT data, bailing 00:04:53.664 15:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:53.664 15:52:51 -- scripts/common.sh@394 -- # pt= 00:04:53.664 15:52:51 -- scripts/common.sh@395 -- # return 1 00:04:53.664 15:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:53.664 1+0 records in 00:04:53.664 1+0 records out 00:04:53.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485411 s, 216 MB/s 00:04:53.664 15:52:51 -- spdk/autotest.sh@105 -- # sync 00:04:53.664 15:52:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:53.664 15:52:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:53.664 15:52:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:55.139 15:52:53 -- spdk/autotest.sh@111 -- # uname -s 00:04:55.139 15:52:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:55.139 15:52:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:55.139 15:52:53 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.962 
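The dd runs above are autotest's pre-clean: every /dev/nvme*n* namespace that is not zoned and holds no partition table gets its first MiB zeroed, so stale metadata from a previous run cannot leak into the tests. A condensed sketch of that loop (same tools as in the trace; error handling and the zoned-device bookkeeping omitted):

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do                            # namespaces only, skip partitions
  scripts/spdk-gpt.py "$dev" && continue                    # GPT present: device is in use
  [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue   # some other partition table
  dd if=/dev/zero of="$dev" bs=1M count=1                   # the "No valid GPT data, bailing" path
done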
Hugepages 00:04:55.962 node hugesize free / total 00:04:55.962 node0 1048576kB 0 / 0 00:04:55.962 node0 2048kB 0 / 0 00:04:55.962 00:04:55.962 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.962 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:55.962 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:55.962 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:55.962 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:56.219 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:56.219 15:52:54 -- spdk/autotest.sh@117 -- # uname -s 00:04:56.219 15:52:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:56.219 15:52:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:56.220 15:52:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.040 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.040 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.040 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.040 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.040 15:52:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:57.971 15:52:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:57.971 15:52:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:57.971 15:52:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.971 15:52:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:57.971 15:52:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.971 15:52:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.971 15:52:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.971 15:52:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:57.971 15:52:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.229 15:52:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:58.229 15:52:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:58.229 15:52:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.486 Waiting for block devices as requested 00:04:58.486 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.744 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.744 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.744 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.000 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:04.000 15:53:01 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.000 15:53:01 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:04.000 15:53:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.000 15:53:01 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:04.000 15:53:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.000 15:53:01 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:04.000 15:53:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.000 15:53:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:04.000 15:53:01 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:04.000 15:53:01 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:04.000 15:53:01 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:04.000 15:53:01 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.000 15:53:01 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.000 15:53:01 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:04.000 15:53:01 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.000 15:53:01 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.000 15:53:01 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.000 15:53:01 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:04.000 15:53:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.000 15:53:01 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.000 15:53:01 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:04.000 15:53:01 -- common/autotest_common.sh@1543 -- # continue 00:05:04.000 15:53:01 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.000 15:53:01 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:04.000 15:53:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.000 15:53:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.000 15:53:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1543 -- # continue 00:05:04.000 15:53:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.000 15:53:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:04.000 15:53:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.000 15:53:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.000 15:53:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1543 -- # continue 00:05:04.000 15:53:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.000 15:53:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:04.000 15:53:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.000 15:53:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.000 15:53:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.000 15:53:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
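The nvme id-ctrl checks above gate the later cleanup on controller capability: each PCI BDF is mapped to its /dev/nvmeX node through sysfs, then bit 3 of OACS (Namespace Management, the 0x8 inside the 0x12a reported here) must be set and unvmcap must be 0 before the device is considered settled. A minimal sketch of that probe for one BDF, following the exact commands in the trace:

bdf=0000:00:10.0
# e.g. /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 -> controller name nvme1
sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$sysfs")
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # ' 0x12a' in this run
if (( oacs & 0x8 )); then
  echo "$ctrlr supports namespace management"
fi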
00:05:04.000 15:53:02 -- common/autotest_common.sh@1543 -- # continue 00:05:04.000 15:53:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:04.000 15:53:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.000 15:53:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.000 15:53:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:04.000 15:53:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.000 15:53:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.000 15:53:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.823 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.823 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.081 15:53:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:05.081 15:53:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.081 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.081 15:53:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:05.081 15:53:03 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:05.081 15:53:03 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.081 15:53:03 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:05.081 15:53:03 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:05.081 15:53:03 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:05.081 15:53:03 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:05.081 15:53:03 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:05.081 15:53:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.081 15:53:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.081 15:53:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.081 15:53:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.081 15:53:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.081 15:53:03 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:05.081 15:53:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:05.081 15:53:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:05.081 15:53:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.081 15:53:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:05.081 15:53:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.081 15:53:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:05.081 15:53:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:05.081 15:53:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:05.081 15:53:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:05.081 15:53:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.081 15:53:03 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:05.081 15:53:03 -- common/autotest_common.sh@1572 -- # return 0 00:05:05.081 15:53:03 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:05.081 15:53:03 -- common/autotest_common.sh@1580 -- # return 0 00:05:05.081 15:53:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:05.081 15:53:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:05.081 15:53:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.082 15:53:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.082 15:53:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:05.082 15:53:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.082 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.082 15:53:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:05.082 15:53:03 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.082 15:53:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.082 15:53:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.082 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.082 ************************************ 00:05:05.082 START TEST env 00:05:05.082 ************************************ 00:05:05.082 15:53:03 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.082 * Looking for test storage... 00:05:05.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.082 15:53:03 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.082 15:53:03 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.082 15:53:03 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.340 15:53:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.340 15:53:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.340 15:53:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.340 15:53:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.340 15:53:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.340 15:53:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.340 15:53:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.340 15:53:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.340 15:53:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.340 15:53:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.340 15:53:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.340 15:53:03 env -- scripts/common.sh@344 -- # case "$op" in 00:05:05.340 15:53:03 env -- scripts/common.sh@345 -- # : 1 00:05:05.340 15:53:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.340 15:53:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.340 15:53:03 env -- scripts/common.sh@365 -- # decimal 1 00:05:05.340 15:53:03 env -- scripts/common.sh@353 -- # local d=1 00:05:05.340 15:53:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.340 15:53:03 env -- scripts/common.sh@355 -- # echo 1 00:05:05.340 15:53:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.340 15:53:03 env -- scripts/common.sh@366 -- # decimal 2 00:05:05.340 15:53:03 env -- scripts/common.sh@353 -- # local d=2 00:05:05.340 15:53:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.340 15:53:03 env -- scripts/common.sh@355 -- # echo 2 00:05:05.340 15:53:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.340 15:53:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.340 15:53:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.340 15:53:03 env -- scripts/common.sh@368 -- # return 0 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.340 --rc genhtml_branch_coverage=1 00:05:05.340 --rc genhtml_function_coverage=1 00:05:05.340 --rc genhtml_legend=1 00:05:05.340 --rc geninfo_all_blocks=1 00:05:05.340 --rc geninfo_unexecuted_blocks=1 00:05:05.340 00:05:05.340 ' 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.340 --rc genhtml_branch_coverage=1 00:05:05.340 --rc genhtml_function_coverage=1 00:05:05.340 --rc genhtml_legend=1 00:05:05.340 --rc geninfo_all_blocks=1 00:05:05.340 --rc geninfo_unexecuted_blocks=1 00:05:05.340 00:05:05.340 ' 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.340 --rc genhtml_branch_coverage=1 00:05:05.340 --rc genhtml_function_coverage=1 00:05:05.340 --rc genhtml_legend=1 00:05:05.340 --rc geninfo_all_blocks=1 00:05:05.340 --rc geninfo_unexecuted_blocks=1 00:05:05.340 00:05:05.340 ' 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.340 --rc genhtml_branch_coverage=1 00:05:05.340 --rc genhtml_function_coverage=1 00:05:05.340 --rc genhtml_legend=1 00:05:05.340 --rc geninfo_all_blocks=1 00:05:05.340 --rc geninfo_unexecuted_blocks=1 00:05:05.340 00:05:05.340 ' 00:05:05.340 15:53:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.340 15:53:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.340 15:53:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.340 ************************************ 00:05:05.340 START TEST env_memory 00:05:05.340 ************************************ 00:05:05.340 15:53:03 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.340 00:05:05.340 00:05:05.340 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.340 http://cunit.sourceforge.net/ 00:05:05.340 00:05:05.340 00:05:05.340 Suite: memory 00:05:05.340 Test: alloc and free memory map ...[2024-11-20 15:53:03.433194] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.340 passed 00:05:05.340 Test: mem map translation ...[2024-11-20 15:53:03.472228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.340 [2024-11-20 15:53:03.472390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.340 [2024-11-20 15:53:03.472501] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.340 [2024-11-20 15:53:03.472571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.340 passed 00:05:05.340 Test: mem map registration ...[2024-11-20 15:53:03.540897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:05.340 [2024-11-20 15:53:03.541042] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:05.340 passed 00:05:05.598 Test: mem map adjacent registrations ...passed 00:05:05.598 00:05:05.598 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.598 suites 1 1 n/a 0 0 00:05:05.598 tests 4 4 4 0 0 00:05:05.598 asserts 152 152 152 0 n/a 00:05:05.598 00:05:05.598 Elapsed time = 0.233 seconds 00:05:05.598 00:05:05.598 real 0m0.274s 00:05:05.598 user 0m0.247s 00:05:05.598 sys 0m0.018s 00:05:05.598 ************************************ 00:05:05.598 END TEST env_memory 00:05:05.598 15:53:03 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.598 15:53:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.598 ************************************ 00:05:05.598 15:53:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.598 15:53:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.598 15:53:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.598 15:53:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.598 ************************************ 00:05:05.598 START TEST env_vtophys 00:05:05.598 ************************************ 00:05:05.598 15:53:03 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.598 EAL: lib.eal log level changed from notice to debug 00:05:05.598 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 1 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 2 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 3 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 4 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 5 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 6 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 7 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 8 as core 0 on socket 0 00:05:05.598 EAL: Detected lcore 9 as core 0 on socket 0 00:05:05.598 EAL: Maximum logical cores by configuration: 128 00:05:05.598 EAL: Detected CPU lcores: 10 00:05:05.598 EAL: Detected NUMA nodes: 1 00:05:05.598 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:05.598 EAL: Detected shared linkage of DPDK 00:05:05.598 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:05.598 EAL: Selected IOVA mode 'PA' 00:05:05.598 EAL: Probing VFIO support... 00:05:05.599 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.599 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:05.599 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.599 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.599 EAL: Setting up physically contiguous memory... 00:05:05.599 EAL: Setting maximum number of open files to 524288 00:05:05.599 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.599 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.599 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.599 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.599 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.599 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.599 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.599 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.599 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.599 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.599 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.599 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.599 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.599 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.599 EAL: Hugepages will be freed exactly as allocated. 00:05:05.599 EAL: No shared files mode enabled, IPC is disabled 00:05:05.599 EAL: No shared files mode enabled, IPC is disabled 00:05:05.599 EAL: TSC frequency is ~2600000 KHz 00:05:05.599 EAL: Main lcore 0 is ready (tid=7fbc84125a40;cpuset=[0]) 00:05:05.599 EAL: Trying to obtain current memory policy. 00:05:05.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.599 EAL: Restoring previous memory policy: 0 00:05:05.599 EAL: request: mp_malloc_sync 00:05:05.599 EAL: No shared files mode enabled, IPC is disabled 00:05:05.599 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.599 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.599 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.599 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.599 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:05.856 00:05:05.856 00:05:05.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.856 http://cunit.sourceforge.net/ 00:05:05.856 00:05:05.856 00:05:05.856 Suite: components_suite 00:05:06.114 Test: vtophys_malloc_test ...passed 00:05:06.114 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.114 EAL: Restoring previous memory policy: 4 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.114 EAL: Trying to obtain current memory policy. 00:05:06.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.114 EAL: Restoring previous memory policy: 4 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.114 EAL: Trying to obtain current memory policy. 00:05:06.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.114 EAL: Restoring previous memory policy: 4 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.114 EAL: Trying to obtain current memory policy. 00:05:06.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.114 EAL: Restoring previous memory policy: 4 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.114 EAL: Trying to obtain current memory policy. 00:05:06.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.114 EAL: Restoring previous memory policy: 4 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.114 EAL: Trying to obtain current memory policy. 
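[Editor's aside] The *ERROR* lines in the env_memory output above are expected: the unit test deliberately feeds spdk_mem_map_set_translation and spdk_mem_register unaligned parameters (len=1234, vaddr=0x4d2) and an out-of-range address (2^48) to confirm they are rejected. A minimal sketch of the rule being tested, assuming the usual 2 MiB hugepage granularity (VALUE_2MB is our own name here, not necessarily SPDK's):

    #include <errno.h>
    #include <stdint.h>
    #include "spdk/env.h"

    #define VALUE_2MB (2ULL * 1024 * 1024)

    static int register_region(void *vaddr, size_t len)
    {
        /* Both vaddr and len must be 2 MiB multiples; unaligned input is
         * exactly what produced the "invalid spdk_mem_register parameters"
         * lines above (e.g. vaddr=200000 len=1234). */
        if (((uintptr_t)vaddr & (VALUE_2MB - 1)) != 0 ||
            (len & (VALUE_2MB - 1)) != 0) {
            return -EINVAL;
        }
        return spdk_mem_register(vaddr, len);
    }
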
00:05:06.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.114 EAL: Restoring previous memory policy: 4 00:05:06.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.114 EAL: request: mp_malloc_sync 00:05:06.114 EAL: No shared files mode enabled, IPC is disabled 00:05:06.114 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.373 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.373 EAL: request: mp_malloc_sync 00:05:06.373 EAL: No shared files mode enabled, IPC is disabled 00:05:06.373 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.373 EAL: Trying to obtain current memory policy. 00:05:06.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.373 EAL: Restoring previous memory policy: 4 00:05:06.373 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.373 EAL: request: mp_malloc_sync 00:05:06.373 EAL: No shared files mode enabled, IPC is disabled 00:05:06.373 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.630 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.631 EAL: request: mp_malloc_sync 00:05:06.631 EAL: No shared files mode enabled, IPC is disabled 00:05:06.631 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.631 EAL: Trying to obtain current memory policy. 00:05:06.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.631 EAL: Restoring previous memory policy: 4 00:05:06.631 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.631 EAL: request: mp_malloc_sync 00:05:06.631 EAL: No shared files mode enabled, IPC is disabled 00:05:06.631 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.888 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.146 EAL: request: mp_malloc_sync 00:05:07.146 EAL: No shared files mode enabled, IPC is disabled 00:05:07.146 EAL: Heap on socket 0 was shrunk by 258MB 00:05:07.404 EAL: Trying to obtain current memory policy. 00:05:07.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.404 EAL: Restoring previous memory policy: 4 00:05:07.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.404 EAL: request: mp_malloc_sync 00:05:07.404 EAL: No shared files mode enabled, IPC is disabled 00:05:07.404 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.969 EAL: request: mp_malloc_sync 00:05:07.969 EAL: No shared files mode enabled, IPC is disabled 00:05:07.969 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.549 EAL: Trying to obtain current memory policy. 
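[Editor's aside] The expand/shrink pairs here are vtophys_malloc_test walking a ladder of allocation sizes (4 MB up to 1026 MB): each allocation grows the EAL heap and fires the 'spdk:(nil)' mem event callback, and each free returns the hugepages, which is why every "expanded by" has a matching "shrunk by". A hedged reconstruction of one iteration — the real test lives under test/env/vtophys/ and differs in detail:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    /* One iteration: allocate (heap "expanded by" N MB), translate the
     * buffer to a physical address, free (heap "shrunk by" N MB). */
    static int check_one_size(size_t size)
    {
        uint64_t paddr;
        void *buf = spdk_dma_zmalloc(size, 0, NULL);

        if (buf == NULL) {
            return -1;
        }
        paddr = spdk_vtophys(buf, NULL);
        printf("%p -> 0x%" PRIx64 "\n", buf, paddr);
        spdk_dma_free(buf);
        return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
    }
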
00:05:08.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.808 EAL: Restoring previous memory policy: 4 00:05:08.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.808 EAL: request: mp_malloc_sync 00:05:08.808 EAL: No shared files mode enabled, IPC is disabled 00:05:08.808 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.739 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.997 EAL: request: mp_malloc_sync 00:05:09.997 EAL: No shared files mode enabled, IPC is disabled 00:05:09.997 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.930 passed 00:05:10.930 00:05:10.930 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.930 suites 1 1 n/a 0 0 00:05:10.930 tests 2 2 2 0 0 00:05:10.930 asserts 5712 5712 5712 0 n/a 00:05:10.930 00:05:10.930 Elapsed time = 5.113 seconds 00:05:10.930 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.930 EAL: request: mp_malloc_sync 00:05:10.930 EAL: No shared files mode enabled, IPC is disabled 00:05:10.930 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.930 EAL: No shared files mode enabled, IPC is disabled 00:05:10.930 EAL: No shared files mode enabled, IPC is disabled 00:05:10.930 EAL: No shared files mode enabled, IPC is disabled 00:05:10.930 ************************************ 00:05:10.930 END TEST env_vtophys 00:05:10.930 ************************************ 00:05:10.930 00:05:10.930 real 0m5.379s 00:05:10.930 user 0m4.554s 00:05:10.930 sys 0m0.666s 00:05:10.930 15:53:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.930 15:53:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:10.930 15:53:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.930 15:53:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.930 15:53:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.930 15:53:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.930 ************************************ 00:05:10.930 START TEST env_pci 00:05:10.930 ************************************ 00:05:10.930 15:53:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.930 00:05:10.930 00:05:10.930 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.930 http://cunit.sourceforge.net/ 00:05:10.930 00:05:10.930 00:05:10.930 Suite: pci 00:05:10.930 Test: pci_hook ...[2024-11-20 15:53:09.120531] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57043 has claimed it 00:05:10.930 passed 00:05:10.930 00:05:10.930 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.930 suites 1 1 n/a 0 0 00:05:10.930 tests 1 1 1 0 0 00:05:10.930 asserts 25 25 25 0 n/a 00:05:10.930 00:05:10.930 Elapsed time = 0.006 seconds 00:05:10.930 EAL: Cannot find device (10000:00:01.0) 00:05:10.930 EAL: Failed to attach device on primary process 00:05:10.930 ************************************ 00:05:10.930 END TEST env_pci 00:05:10.930 ************************************ 00:05:10.930 00:05:10.930 real 0m0.063s 00:05:10.930 user 0m0.029s 00:05:10.930 sys 0m0.033s 00:05:10.930 15:53:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.930 15:53:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:11.188 15:53:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.188 15:53:09 env -- env/env.sh@15 -- # uname 00:05:11.188 15:53:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.188 15:53:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.188 15:53:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.188 15:53:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:11.188 15:53:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.188 15:53:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.188 ************************************ 00:05:11.188 START TEST env_dpdk_post_init 00:05:11.188 ************************************ 00:05:11.188 15:53:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.188 EAL: Detected CPU lcores: 10 00:05:11.188 EAL: Detected NUMA nodes: 1 00:05:11.188 EAL: Detected shared linkage of DPDK 00:05:11.188 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.188 EAL: Selected IOVA mode 'PA' 00:05:11.188 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.188 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:11.188 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:11.188 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:11.188 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:11.188 Starting DPDK initialization... 00:05:11.188 Starting SPDK post initialization... 00:05:11.188 SPDK NVMe probe 00:05:11.188 Attaching to 0000:00:10.0 00:05:11.188 Attaching to 0000:00:11.0 00:05:11.188 Attaching to 0000:00:12.0 00:05:11.188 Attaching to 0000:00:13.0 00:05:11.188 Attached to 0000:00:10.0 00:05:11.188 Attached to 0000:00:11.0 00:05:11.188 Attached to 0000:00:13.0 00:05:11.188 Attached to 0000:00:12.0 00:05:11.188 Cleaning up... 
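[Editor's aside] For context on the Attaching/Attached lines: env_dpdk_post_init initializes the SPDK environment and lets the NVMe driver claim the four emulated controllers (1b36:0010). A bare-bones sketch of that probe/attach flow, with callback shapes following spdk/nvme.h and all error handling trimmed:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* accept every controller found on the bus */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* A NULL transport ID scans the local PCIe bus, matching the
         * "Probe PCI driver: spdk_nvme" lines in the log above. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }
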
00:05:11.445 00:05:11.445 real 0m0.243s 00:05:11.445 user 0m0.086s 00:05:11.445 sys 0m0.058s 00:05:11.445 15:53:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.445 ************************************ 00:05:11.445 END TEST env_dpdk_post_init 00:05:11.445 ************************************ 00:05:11.445 15:53:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.445 15:53:09 env -- env/env.sh@26 -- # uname 00:05:11.445 15:53:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.445 15:53:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.445 15:53:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.445 15:53:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.445 15:53:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.445 ************************************ 00:05:11.445 START TEST env_mem_callbacks 00:05:11.445 ************************************ 00:05:11.445 15:53:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.445 EAL: Detected CPU lcores: 10 00:05:11.445 EAL: Detected NUMA nodes: 1 00:05:11.445 EAL: Detected shared linkage of DPDK 00:05:11.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.446 EAL: Selected IOVA mode 'PA' 00:05:11.446 00:05:11.446 00:05:11.446 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.446 http://cunit.sourceforge.net/ 00:05:11.446 00:05:11.446 00:05:11.446 Suite: memory 00:05:11.446 Test: test ... 00:05:11.446 register 0x200000200000 2097152 00:05:11.446 malloc 3145728 00:05:11.446 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.446 register 0x200000400000 4194304 00:05:11.446 buf 0x2000004fffc0 len 3145728 PASSED 00:05:11.446 malloc 64 00:05:11.446 buf 0x2000004ffec0 len 64 PASSED 00:05:11.446 malloc 4194304 00:05:11.446 register 0x200000800000 6291456 00:05:11.446 buf 0x2000009fffc0 len 4194304 PASSED 00:05:11.446 free 0x2000004fffc0 3145728 00:05:11.446 free 0x2000004ffec0 64 00:05:11.446 unregister 0x200000400000 4194304 PASSED 00:05:11.446 free 0x2000009fffc0 4194304 00:05:11.446 unregister 0x200000800000 6291456 PASSED 00:05:11.446 malloc 8388608 00:05:11.446 register 0x200000400000 10485760 00:05:11.446 buf 0x2000005fffc0 len 8388608 PASSED 00:05:11.446 free 0x2000005fffc0 8388608 00:05:11.446 unregister 0x200000400000 10485760 PASSED 00:05:11.446 passed 00:05:11.446 00:05:11.446 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.446 suites 1 1 n/a 0 0 00:05:11.446 tests 1 1 1 0 0 00:05:11.446 asserts 15 15 15 0 n/a 00:05:11.446 00:05:11.446 Elapsed time = 0.047 seconds 00:05:11.703 00:05:11.703 real 0m0.212s 00:05:11.703 user 0m0.066s 00:05:11.703 sys 0m0.044s 00:05:11.703 ************************************ 00:05:11.703 END TEST env_mem_callbacks 00:05:11.703 ************************************ 00:05:11.703 15:53:09 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.703 15:53:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.703 ************************************ 00:05:11.703 END TEST env 00:05:11.703 ************************************ 00:05:11.703 00:05:11.703 real 0m6.508s 00:05:11.703 user 0m5.142s 00:05:11.703 sys 0m0.998s 00:05:11.703 15:53:09 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.703 15:53:09 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.703 15:53:09 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.703 15:53:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.703 15:53:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.703 15:53:09 -- common/autotest_common.sh@10 -- # set +x 00:05:11.703 ************************************ 00:05:11.703 START TEST rpc 00:05:11.703 ************************************ 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.703 * Looking for test storage... 00:05:11.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.703 15:53:09 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.703 15:53:09 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.703 15:53:09 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.703 15:53:09 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.703 15:53:09 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.703 15:53:09 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.703 15:53:09 rpc -- scripts/common.sh@345 -- # : 1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.703 15:53:09 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.703 15:53:09 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.703 15:53:09 rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.703 15:53:09 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.703 15:53:09 rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
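[Editor's aside] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is waitforlisten polling until spdk_tgt's RPC server accepts connections. The same check, sketched with SPDK's JSON-RPC client API — treat the exact call shapes as an assumption from spdk/jsonrpc.h rather than gospel:

    #include <sys/socket.h>
    #include "spdk/jsonrpc.h"

    /* A NULL return means the target is not listening yet; retry. */
    struct spdk_jsonrpc_client *client =
            spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
    if (client != NULL) {
        spdk_jsonrpc_client_close(client);  /* socket accepts connections */
    }
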
00:05:11.703 15:53:09 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.703 15:53:09 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.703 15:53:09 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.703 15:53:09 rpc -- scripts/common.sh@368 -- # return 0 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.703 --rc genhtml_branch_coverage=1 00:05:11.703 --rc genhtml_function_coverage=1 00:05:11.703 --rc genhtml_legend=1 00:05:11.703 --rc geninfo_all_blocks=1 00:05:11.703 --rc geninfo_unexecuted_blocks=1 00:05:11.703 00:05:11.703 ' 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.703 --rc genhtml_branch_coverage=1 00:05:11.703 --rc genhtml_function_coverage=1 00:05:11.703 --rc genhtml_legend=1 00:05:11.703 --rc geninfo_all_blocks=1 00:05:11.703 --rc geninfo_unexecuted_blocks=1 00:05:11.703 00:05:11.703 ' 00:05:11.703 15:53:09 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.703 --rc genhtml_branch_coverage=1 00:05:11.703 --rc genhtml_function_coverage=1 00:05:11.703 --rc genhtml_legend=1 00:05:11.703 --rc geninfo_all_blocks=1 00:05:11.704 --rc geninfo_unexecuted_blocks=1 00:05:11.704 00:05:11.704 ' 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.704 --rc genhtml_branch_coverage=1 00:05:11.704 --rc genhtml_function_coverage=1 00:05:11.704 --rc genhtml_legend=1 00:05:11.704 --rc geninfo_all_blocks=1 00:05:11.704 --rc geninfo_unexecuted_blocks=1 00:05:11.704 00:05:11.704 ' 00:05:11.704 15:53:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57165 00:05:11.704 15:53:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.704 15:53:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57165 00:05:11.704 15:53:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@835 -- # '[' -z 57165 ']' 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.704 15:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.961 [2024-11-20 15:53:09.973601] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:11.961 [2024-11-20 15:53:09.973709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57165 ] 00:05:11.961 [2024-11-20 15:53:10.128549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.311 [2024-11-20 15:53:10.229157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
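[Editor's aside] One bit of arithmetic worth calling out before the JSON dumps that follow: rpc_integrity creates its bdev with 'rpc_cmd bdev_malloc_create 8 512' (8 MiB total, 512-byte blocks), so the Malloc0 descriptor reports exactly 16384 blocks:

    /* bdev_malloc_create 8 512  =>  (8 MiB) / (512 B) = 16384 blocks,
     * matching "num_blocks": 16384 in the Malloc0 dump below. */
    _Static_assert((8ULL << 20) / 512 == 16384, "malloc bdev geometry");
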
00:05:12.311 [2024-11-20 15:53:10.229215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57165' to capture a snapshot of events at runtime. 00:05:12.311 [2024-11-20 15:53:10.229225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.311 [2024-11-20 15:53:10.229235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.311 [2024-11-20 15:53:10.229242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57165 for offline analysis/debug. 00:05:12.311 [2024-11-20 15:53:10.230110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.880 15:53:10 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.880 15:53:10 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.880 15:53:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.880 15:53:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.880 15:53:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.880 15:53:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.880 15:53:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.880 15:53:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.881 15:53:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 ************************************ 00:05:12.881 START TEST rpc_integrity 00:05:12.881 ************************************ 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.881 { 00:05:12.881 "name": "Malloc0", 00:05:12.881 "aliases": [ 00:05:12.881 "f299176a-3595-49fd-b650-ca4695cfe47a" 00:05:12.881 ], 
00:05:12.881 "product_name": "Malloc disk", 00:05:12.881 "block_size": 512, 00:05:12.881 "num_blocks": 16384, 00:05:12.881 "uuid": "f299176a-3595-49fd-b650-ca4695cfe47a", 00:05:12.881 "assigned_rate_limits": { 00:05:12.881 "rw_ios_per_sec": 0, 00:05:12.881 "rw_mbytes_per_sec": 0, 00:05:12.881 "r_mbytes_per_sec": 0, 00:05:12.881 "w_mbytes_per_sec": 0 00:05:12.881 }, 00:05:12.881 "claimed": false, 00:05:12.881 "zoned": false, 00:05:12.881 "supported_io_types": { 00:05:12.881 "read": true, 00:05:12.881 "write": true, 00:05:12.881 "unmap": true, 00:05:12.881 "flush": true, 00:05:12.881 "reset": true, 00:05:12.881 "nvme_admin": false, 00:05:12.881 "nvme_io": false, 00:05:12.881 "nvme_io_md": false, 00:05:12.881 "write_zeroes": true, 00:05:12.881 "zcopy": true, 00:05:12.881 "get_zone_info": false, 00:05:12.881 "zone_management": false, 00:05:12.881 "zone_append": false, 00:05:12.881 "compare": false, 00:05:12.881 "compare_and_write": false, 00:05:12.881 "abort": true, 00:05:12.881 "seek_hole": false, 00:05:12.881 "seek_data": false, 00:05:12.881 "copy": true, 00:05:12.881 "nvme_iov_md": false 00:05:12.881 }, 00:05:12.881 "memory_domains": [ 00:05:12.881 { 00:05:12.881 "dma_device_id": "system", 00:05:12.881 "dma_device_type": 1 00:05:12.881 }, 00:05:12.881 { 00:05:12.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.881 "dma_device_type": 2 00:05:12.881 } 00:05:12.881 ], 00:05:12.881 "driver_specific": {} 00:05:12.881 } 00:05:12.881 ]' 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 [2024-11-20 15:53:10.934607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.881 [2024-11-20 15:53:10.934804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.881 [2024-11-20 15:53:10.934904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:12.881 [2024-11-20 15:53:10.934924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.881 [2024-11-20 15:53:10.937175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.881 [2024-11-20 15:53:10.937308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.881 Passthru0 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.881 { 00:05:12.881 "name": "Malloc0", 00:05:12.881 "aliases": [ 00:05:12.881 "f299176a-3595-49fd-b650-ca4695cfe47a" 00:05:12.881 ], 00:05:12.881 "product_name": "Malloc disk", 00:05:12.881 "block_size": 512, 00:05:12.881 "num_blocks": 16384, 00:05:12.881 "uuid": "f299176a-3595-49fd-b650-ca4695cfe47a", 00:05:12.881 "assigned_rate_limits": { 00:05:12.881 "rw_ios_per_sec": 0, 
00:05:12.881 "rw_mbytes_per_sec": 0, 00:05:12.881 "r_mbytes_per_sec": 0, 00:05:12.881 "w_mbytes_per_sec": 0 00:05:12.881 }, 00:05:12.881 "claimed": true, 00:05:12.881 "claim_type": "exclusive_write", 00:05:12.881 "zoned": false, 00:05:12.881 "supported_io_types": { 00:05:12.881 "read": true, 00:05:12.881 "write": true, 00:05:12.881 "unmap": true, 00:05:12.881 "flush": true, 00:05:12.881 "reset": true, 00:05:12.881 "nvme_admin": false, 00:05:12.881 "nvme_io": false, 00:05:12.881 "nvme_io_md": false, 00:05:12.881 "write_zeroes": true, 00:05:12.881 "zcopy": true, 00:05:12.881 "get_zone_info": false, 00:05:12.881 "zone_management": false, 00:05:12.881 "zone_append": false, 00:05:12.881 "compare": false, 00:05:12.881 "compare_and_write": false, 00:05:12.881 "abort": true, 00:05:12.881 "seek_hole": false, 00:05:12.881 "seek_data": false, 00:05:12.881 "copy": true, 00:05:12.881 "nvme_iov_md": false 00:05:12.881 }, 00:05:12.881 "memory_domains": [ 00:05:12.881 { 00:05:12.881 "dma_device_id": "system", 00:05:12.881 "dma_device_type": 1 00:05:12.881 }, 00:05:12.881 { 00:05:12.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.881 "dma_device_type": 2 00:05:12.881 } 00:05:12.881 ], 00:05:12.881 "driver_specific": {} 00:05:12.881 }, 00:05:12.881 { 00:05:12.881 "name": "Passthru0", 00:05:12.881 "aliases": [ 00:05:12.881 "c3f3260b-b371-58a3-9de3-e15242db276e" 00:05:12.881 ], 00:05:12.881 "product_name": "passthru", 00:05:12.881 "block_size": 512, 00:05:12.881 "num_blocks": 16384, 00:05:12.881 "uuid": "c3f3260b-b371-58a3-9de3-e15242db276e", 00:05:12.881 "assigned_rate_limits": { 00:05:12.881 "rw_ios_per_sec": 0, 00:05:12.881 "rw_mbytes_per_sec": 0, 00:05:12.881 "r_mbytes_per_sec": 0, 00:05:12.881 "w_mbytes_per_sec": 0 00:05:12.881 }, 00:05:12.881 "claimed": false, 00:05:12.881 "zoned": false, 00:05:12.881 "supported_io_types": { 00:05:12.881 "read": true, 00:05:12.881 "write": true, 00:05:12.881 "unmap": true, 00:05:12.881 "flush": true, 00:05:12.881 "reset": true, 00:05:12.881 "nvme_admin": false, 00:05:12.881 "nvme_io": false, 00:05:12.881 "nvme_io_md": false, 00:05:12.881 "write_zeroes": true, 00:05:12.881 "zcopy": true, 00:05:12.881 "get_zone_info": false, 00:05:12.881 "zone_management": false, 00:05:12.881 "zone_append": false, 00:05:12.881 "compare": false, 00:05:12.881 "compare_and_write": false, 00:05:12.881 "abort": true, 00:05:12.881 "seek_hole": false, 00:05:12.881 "seek_data": false, 00:05:12.881 "copy": true, 00:05:12.881 "nvme_iov_md": false 00:05:12.881 }, 00:05:12.881 "memory_domains": [ 00:05:12.881 { 00:05:12.881 "dma_device_id": "system", 00:05:12.881 "dma_device_type": 1 00:05:12.881 }, 00:05:12.881 { 00:05:12.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.881 "dma_device_type": 2 00:05:12.881 } 00:05:12.881 ], 00:05:12.881 "driver_specific": { 00:05:12.881 "passthru": { 00:05:12.881 "name": "Passthru0", 00:05:12.881 "base_bdev_name": "Malloc0" 00:05:12.881 } 00:05:12.881 } 00:05:12.881 } 00:05:12.881 ]' 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.881 15:53:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.881 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.881 15:53:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.881 15:53:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.881 15:53:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.881 00:05:12.881 real 0m0.243s 00:05:12.881 user 0m0.127s 00:05:12.881 sys 0m0.030s 00:05:12.882 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.882 15:53:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.882 ************************************ 00:05:12.882 END TEST rpc_integrity 00:05:12.882 ************************************ 00:05:12.882 15:53:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.882 15:53:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.882 15:53:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.882 15:53:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.882 ************************************ 00:05:12.882 START TEST rpc_plugins 00:05:12.882 ************************************ 00:05:12.882 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:12.882 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.882 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.882 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.139 { 00:05:13.139 "name": "Malloc1", 00:05:13.139 "aliases": [ 00:05:13.139 "93ce6649-a9de-4673-a06e-6ce21df9e818" 00:05:13.139 ], 00:05:13.139 "product_name": "Malloc disk", 00:05:13.139 "block_size": 4096, 00:05:13.139 "num_blocks": 256, 00:05:13.139 "uuid": "93ce6649-a9de-4673-a06e-6ce21df9e818", 00:05:13.139 "assigned_rate_limits": { 00:05:13.139 "rw_ios_per_sec": 0, 00:05:13.139 "rw_mbytes_per_sec": 0, 00:05:13.139 "r_mbytes_per_sec": 0, 00:05:13.139 "w_mbytes_per_sec": 0 00:05:13.139 }, 00:05:13.139 "claimed": false, 00:05:13.139 "zoned": false, 00:05:13.139 "supported_io_types": { 00:05:13.139 "read": true, 00:05:13.139 "write": true, 00:05:13.139 "unmap": true, 00:05:13.139 "flush": true, 00:05:13.139 "reset": true, 00:05:13.139 "nvme_admin": false, 00:05:13.139 "nvme_io": false, 00:05:13.139 "nvme_io_md": false, 00:05:13.139 "write_zeroes": true, 
00:05:13.139 "zcopy": true, 00:05:13.139 "get_zone_info": false, 00:05:13.139 "zone_management": false, 00:05:13.139 "zone_append": false, 00:05:13.139 "compare": false, 00:05:13.139 "compare_and_write": false, 00:05:13.139 "abort": true, 00:05:13.139 "seek_hole": false, 00:05:13.139 "seek_data": false, 00:05:13.139 "copy": true, 00:05:13.139 "nvme_iov_md": false 00:05:13.139 }, 00:05:13.139 "memory_domains": [ 00:05:13.139 { 00:05:13.139 "dma_device_id": "system", 00:05:13.139 "dma_device_type": 1 00:05:13.139 }, 00:05:13.139 { 00:05:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.139 "dma_device_type": 2 00:05:13.139 } 00:05:13.139 ], 00:05:13.139 "driver_specific": {} 00:05:13.139 } 00:05:13.139 ]' 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.139 15:53:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.139 00:05:13.139 real 0m0.118s 00:05:13.139 user 0m0.065s 00:05:13.139 sys 0m0.016s 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.139 15:53:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.139 ************************************ 00:05:13.139 END TEST rpc_plugins 00:05:13.139 ************************************ 00:05:13.139 15:53:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.139 15:53:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.139 15:53:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.139 15:53:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.139 ************************************ 00:05:13.139 START TEST rpc_trace_cmd_test 00:05:13.140 ************************************ 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.140 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57165", 00:05:13.140 "tpoint_group_mask": "0x8", 00:05:13.140 "iscsi_conn": { 00:05:13.140 "mask": "0x2", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "scsi": { 00:05:13.140 
"mask": "0x4", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "bdev": { 00:05:13.140 "mask": "0x8", 00:05:13.140 "tpoint_mask": "0xffffffffffffffff" 00:05:13.140 }, 00:05:13.140 "nvmf_rdma": { 00:05:13.140 "mask": "0x10", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "nvmf_tcp": { 00:05:13.140 "mask": "0x20", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "ftl": { 00:05:13.140 "mask": "0x40", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "blobfs": { 00:05:13.140 "mask": "0x80", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "dsa": { 00:05:13.140 "mask": "0x200", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "thread": { 00:05:13.140 "mask": "0x400", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "nvme_pcie": { 00:05:13.140 "mask": "0x800", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "iaa": { 00:05:13.140 "mask": "0x1000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "nvme_tcp": { 00:05:13.140 "mask": "0x2000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "bdev_nvme": { 00:05:13.140 "mask": "0x4000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "sock": { 00:05:13.140 "mask": "0x8000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "blob": { 00:05:13.140 "mask": "0x10000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "bdev_raid": { 00:05:13.140 "mask": "0x20000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 }, 00:05:13.140 "scheduler": { 00:05:13.140 "mask": "0x40000", 00:05:13.140 "tpoint_mask": "0x0" 00:05:13.140 } 00:05:13.140 }' 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.140 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.398 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.398 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.398 ************************************ 00:05:13.398 END TEST rpc_trace_cmd_test 00:05:13.398 ************************************ 00:05:13.398 15:53:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.398 00:05:13.398 real 0m0.163s 00:05:13.398 user 0m0.136s 00:05:13.398 sys 0m0.020s 00:05:13.398 15:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.398 15:53:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 15:53:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.398 15:53:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.398 15:53:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.398 15:53:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.398 15:53:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.398 15:53:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 ************************************ 00:05:13.398 START TEST rpc_daemon_integrity 00:05:13.398 
************************************ 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.398 { 00:05:13.398 "name": "Malloc2", 00:05:13.398 "aliases": [ 00:05:13.398 "dbe7c617-9ada-43ce-ba75-c1aac637783d" 00:05:13.398 ], 00:05:13.398 "product_name": "Malloc disk", 00:05:13.398 "block_size": 512, 00:05:13.398 "num_blocks": 16384, 00:05:13.398 "uuid": "dbe7c617-9ada-43ce-ba75-c1aac637783d", 00:05:13.398 "assigned_rate_limits": { 00:05:13.398 "rw_ios_per_sec": 0, 00:05:13.398 "rw_mbytes_per_sec": 0, 00:05:13.398 "r_mbytes_per_sec": 0, 00:05:13.398 "w_mbytes_per_sec": 0 00:05:13.398 }, 00:05:13.398 "claimed": false, 00:05:13.398 "zoned": false, 00:05:13.398 "supported_io_types": { 00:05:13.398 "read": true, 00:05:13.398 "write": true, 00:05:13.398 "unmap": true, 00:05:13.398 "flush": true, 00:05:13.398 "reset": true, 00:05:13.398 "nvme_admin": false, 00:05:13.398 "nvme_io": false, 00:05:13.398 "nvme_io_md": false, 00:05:13.398 "write_zeroes": true, 00:05:13.398 "zcopy": true, 00:05:13.398 "get_zone_info": false, 00:05:13.398 "zone_management": false, 00:05:13.398 "zone_append": false, 00:05:13.398 "compare": false, 00:05:13.398 "compare_and_write": false, 00:05:13.398 "abort": true, 00:05:13.398 "seek_hole": false, 00:05:13.398 "seek_data": false, 00:05:13.398 "copy": true, 00:05:13.398 "nvme_iov_md": false 00:05:13.398 }, 00:05:13.398 "memory_domains": [ 00:05:13.398 { 00:05:13.398 "dma_device_id": "system", 00:05:13.398 "dma_device_type": 1 00:05:13.398 }, 00:05:13.398 { 00:05:13.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.398 "dma_device_type": 2 00:05:13.398 } 00:05:13.398 ], 00:05:13.398 "driver_specific": {} 00:05:13.398 } 00:05:13.398 ]' 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 [2024-11-20 15:53:11.587018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.398 [2024-11-20 15:53:11.587079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.398 [2024-11-20 15:53:11.587099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:13.398 [2024-11-20 15:53:11.587110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.398 [2024-11-20 15:53:11.589290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.398 [2024-11-20 15:53:11.589329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.398 Passthru0 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.398 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.398 { 00:05:13.398 "name": "Malloc2", 00:05:13.398 "aliases": [ 00:05:13.398 "dbe7c617-9ada-43ce-ba75-c1aac637783d" 00:05:13.398 ], 00:05:13.398 "product_name": "Malloc disk", 00:05:13.398 "block_size": 512, 00:05:13.398 "num_blocks": 16384, 00:05:13.398 "uuid": "dbe7c617-9ada-43ce-ba75-c1aac637783d", 00:05:13.398 "assigned_rate_limits": { 00:05:13.398 "rw_ios_per_sec": 0, 00:05:13.398 "rw_mbytes_per_sec": 0, 00:05:13.398 "r_mbytes_per_sec": 0, 00:05:13.398 "w_mbytes_per_sec": 0 00:05:13.398 }, 00:05:13.398 "claimed": true, 00:05:13.398 "claim_type": "exclusive_write", 00:05:13.398 "zoned": false, 00:05:13.398 "supported_io_types": { 00:05:13.398 "read": true, 00:05:13.398 "write": true, 00:05:13.398 "unmap": true, 00:05:13.398 "flush": true, 00:05:13.398 "reset": true, 00:05:13.398 "nvme_admin": false, 00:05:13.398 "nvme_io": false, 00:05:13.398 "nvme_io_md": false, 00:05:13.398 "write_zeroes": true, 00:05:13.398 "zcopy": true, 00:05:13.398 "get_zone_info": false, 00:05:13.398 "zone_management": false, 00:05:13.398 "zone_append": false, 00:05:13.398 "compare": false, 00:05:13.398 "compare_and_write": false, 00:05:13.399 "abort": true, 00:05:13.399 "seek_hole": false, 00:05:13.399 "seek_data": false, 00:05:13.399 "copy": true, 00:05:13.399 "nvme_iov_md": false 00:05:13.399 }, 00:05:13.399 "memory_domains": [ 00:05:13.399 { 00:05:13.399 "dma_device_id": "system", 00:05:13.399 "dma_device_type": 1 00:05:13.399 }, 00:05:13.399 { 00:05:13.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.399 "dma_device_type": 2 00:05:13.399 } 00:05:13.399 ], 00:05:13.399 "driver_specific": {} 00:05:13.399 }, 00:05:13.399 { 00:05:13.399 "name": "Passthru0", 00:05:13.399 "aliases": [ 00:05:13.399 "30101514-b097-53e9-95f7-32a59a96a267" 00:05:13.399 ], 00:05:13.399 "product_name": "passthru", 00:05:13.399 "block_size": 512, 00:05:13.399 "num_blocks": 16384, 00:05:13.399 "uuid": "30101514-b097-53e9-95f7-32a59a96a267", 00:05:13.399 "assigned_rate_limits": { 00:05:13.399 
"rw_ios_per_sec": 0, 00:05:13.399 "rw_mbytes_per_sec": 0, 00:05:13.399 "r_mbytes_per_sec": 0, 00:05:13.399 "w_mbytes_per_sec": 0 00:05:13.399 }, 00:05:13.399 "claimed": false, 00:05:13.399 "zoned": false, 00:05:13.399 "supported_io_types": { 00:05:13.399 "read": true, 00:05:13.399 "write": true, 00:05:13.399 "unmap": true, 00:05:13.399 "flush": true, 00:05:13.399 "reset": true, 00:05:13.399 "nvme_admin": false, 00:05:13.399 "nvme_io": false, 00:05:13.399 "nvme_io_md": false, 00:05:13.399 "write_zeroes": true, 00:05:13.399 "zcopy": true, 00:05:13.399 "get_zone_info": false, 00:05:13.399 "zone_management": false, 00:05:13.399 "zone_append": false, 00:05:13.399 "compare": false, 00:05:13.399 "compare_and_write": false, 00:05:13.399 "abort": true, 00:05:13.399 "seek_hole": false, 00:05:13.399 "seek_data": false, 00:05:13.399 "copy": true, 00:05:13.399 "nvme_iov_md": false 00:05:13.399 }, 00:05:13.399 "memory_domains": [ 00:05:13.399 { 00:05:13.399 "dma_device_id": "system", 00:05:13.399 "dma_device_type": 1 00:05:13.399 }, 00:05:13.399 { 00:05:13.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.399 "dma_device_type": 2 00:05:13.399 } 00:05:13.399 ], 00:05:13.399 "driver_specific": { 00:05:13.399 "passthru": { 00:05:13.399 "name": "Passthru0", 00:05:13.399 "base_bdev_name": "Malloc2" 00:05:13.399 } 00:05:13.399 } 00:05:13.399 } 00:05:13.399 ]' 00:05:13.399 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.655 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.655 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.655 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.655 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.655 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.655 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.656 ************************************ 00:05:13.656 END TEST rpc_daemon_integrity 00:05:13.656 ************************************ 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.656 00:05:13.656 real 0m0.244s 00:05:13.656 user 0m0.126s 00:05:13.656 sys 0m0.033s 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.656 15:53:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.656 15:53:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.656 15:53:11 rpc -- rpc/rpc.sh@84 -- # killprocess 57165 00:05:13.656 15:53:11 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57165 ']' 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@958 -- # kill -0 57165 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57165 00:05:13.656 killing process with pid 57165 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57165' 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@973 -- # kill 57165 00:05:13.656 15:53:11 rpc -- common/autotest_common.sh@978 -- # wait 57165 00:05:15.554 00:05:15.554 real 0m3.541s 00:05:15.554 user 0m3.962s 00:05:15.554 sys 0m0.590s 00:05:15.554 15:53:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.554 ************************************ 00:05:15.554 END TEST rpc 00:05:15.554 ************************************ 00:05:15.554 15:53:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 15:53:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.554 15:53:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.554 15:53:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.554 15:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 ************************************ 00:05:15.554 START TEST skip_rpc 00:05:15.554 ************************************ 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.554 * Looking for test storage... 00:05:15.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.554 15:53:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.554 --rc genhtml_branch_coverage=1 00:05:15.554 --rc genhtml_function_coverage=1 00:05:15.554 --rc genhtml_legend=1 00:05:15.554 --rc geninfo_all_blocks=1 00:05:15.554 --rc geninfo_unexecuted_blocks=1 00:05:15.554 00:05:15.554 ' 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.554 --rc genhtml_branch_coverage=1 00:05:15.554 --rc genhtml_function_coverage=1 00:05:15.554 --rc genhtml_legend=1 00:05:15.554 --rc geninfo_all_blocks=1 00:05:15.554 --rc geninfo_unexecuted_blocks=1 00:05:15.554 00:05:15.554 ' 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.554 --rc genhtml_branch_coverage=1 00:05:15.554 --rc genhtml_function_coverage=1 00:05:15.554 --rc genhtml_legend=1 00:05:15.554 --rc geninfo_all_blocks=1 00:05:15.554 --rc geninfo_unexecuted_blocks=1 00:05:15.554 00:05:15.554 ' 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.554 --rc genhtml_branch_coverage=1 00:05:15.554 --rc genhtml_function_coverage=1 00:05:15.554 --rc genhtml_legend=1 00:05:15.554 --rc geninfo_all_blocks=1 00:05:15.554 --rc geninfo_unexecuted_blocks=1 00:05:15.554 00:05:15.554 ' 00:05:15.554 15:53:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.554 15:53:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.554 15:53:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.554 15:53:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 ************************************ 00:05:15.554 START TEST skip_rpc 00:05:15.554 ************************************ 00:05:15.554 15:53:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:15.554 15:53:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57377 00:05:15.554 15:53:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.554 15:53:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:15.554 15:53:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:15.554 [2024-11-20 15:53:13.580407] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:15.554 [2024-11-20 15:53:13.580779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57377 ] 00:05:15.554 [2024-11-20 15:53:13.744497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.812 [2024-11-20 15:53:13.862751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57377 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57377 ']' 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57377 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57377 00:05:21.069 killing process with pid 57377 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57377' 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57377 00:05:21.069 15:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57377 00:05:21.639 00:05:21.639 real 0m6.375s 00:05:21.639 user 0m5.917s 00:05:21.639 sys 0m0.343s 00:05:21.639 15:53:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.639 ************************************ 00:05:21.639 END TEST skip_rpc 00:05:21.639 ************************************ 00:05:21.639 15:53:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.896 15:53:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.896 15:53:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.896 15:53:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.896 15:53:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.896 ************************************ 00:05:21.896 START TEST skip_rpc_with_json 00:05:21.896 ************************************ 00:05:21.896 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:21.896 15:53:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.896 15:53:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57471 00:05:21.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.896 15:53:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.896 15:53:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57471 00:05:21.896 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57471 ']' 00:05:21.897 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.897 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.897 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.897 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.897 15:53:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.897 15:53:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.897 [2024-11-20 15:53:20.008710] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
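While the next target (pid 57471, the skip_rpc_with_json case) initializes above, it is worth pinning down the pattern the just-finished skip_rpc case exercised: spdk_tgt is launched with --no-rpc-server, so the rpc_cmd spdk_get_version call must fail, and the test only passes when it does. A minimal bash sketch of that pattern, condensed from the trace (rpc_cmd and killprocess are the autotest_common.sh helpers seen throughout this log; the compact form here is illustrative, not the test's verbatim source):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" --no-rpc-server -m 0x1 &   # target with the RPC server disabled
    spdk_pid=$!
    sleep 5                                # no socket to poll for, so a fixed delay
    if rpc_cmd spdk_get_version; then      # must fail: nothing listens on spdk.sock
        echo 'RPC unexpectedly succeeded with --no-rpc-server' >&2
        exit 1
    fi
    killprocess "$spdk_pid"                # kill -0 check, kill, then wait, as traced

The NOT/valid_exec_arg/es chain in the trace is the harness's status-inversion equivalent of the if-block above: es stays 1 only when the wrapped RPC call fails.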
00:05:21.897 [2024-11-20 15:53:20.008826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57471 ] 00:05:22.153 [2024-11-20 15:53:20.161841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.153 [2024-11-20 15:53:20.278026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.717 [2024-11-20 15:53:20.929569] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.717 request: 00:05:22.717 { 00:05:22.717 "trtype": "tcp", 00:05:22.717 "method": "nvmf_get_transports", 00:05:22.717 "req_id": 1 00:05:22.717 } 00:05:22.717 Got JSON-RPC error response 00:05:22.717 response: 00:05:22.717 { 00:05:22.717 "code": -19, 00:05:22.717 "message": "No such device" 00:05:22.717 } 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.717 [2024-11-20 15:53:20.937678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.717 15:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.977 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.977 15:53:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.977 { 00:05:22.977 "subsystems": [ 00:05:22.977 { 00:05:22.977 "subsystem": "fsdev", 00:05:22.977 "config": [ 00:05:22.977 { 00:05:22.977 "method": "fsdev_set_opts", 00:05:22.977 "params": { 00:05:22.977 "fsdev_io_pool_size": 65535, 00:05:22.977 "fsdev_io_cache_size": 256 00:05:22.977 } 00:05:22.977 } 00:05:22.977 ] 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "subsystem": "keyring", 00:05:22.977 "config": [] 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "subsystem": "iobuf", 00:05:22.977 "config": [ 00:05:22.977 { 00:05:22.977 "method": "iobuf_set_options", 00:05:22.977 "params": { 00:05:22.977 "small_pool_count": 8192, 00:05:22.977 "large_pool_count": 1024, 00:05:22.977 "small_bufsize": 8192, 00:05:22.977 "large_bufsize": 135168, 00:05:22.977 "enable_numa": false 00:05:22.977 } 00:05:22.977 } 00:05:22.977 ] 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "subsystem": "sock", 00:05:22.977 "config": [ 00:05:22.977 { 
00:05:22.977 "method": "sock_set_default_impl", 00:05:22.977 "params": { 00:05:22.977 "impl_name": "posix" 00:05:22.977 } 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "method": "sock_impl_set_options", 00:05:22.977 "params": { 00:05:22.977 "impl_name": "ssl", 00:05:22.977 "recv_buf_size": 4096, 00:05:22.977 "send_buf_size": 4096, 00:05:22.977 "enable_recv_pipe": true, 00:05:22.977 "enable_quickack": false, 00:05:22.977 "enable_placement_id": 0, 00:05:22.977 "enable_zerocopy_send_server": true, 00:05:22.977 "enable_zerocopy_send_client": false, 00:05:22.977 "zerocopy_threshold": 0, 00:05:22.977 "tls_version": 0, 00:05:22.977 "enable_ktls": false 00:05:22.977 } 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "method": "sock_impl_set_options", 00:05:22.977 "params": { 00:05:22.977 "impl_name": "posix", 00:05:22.977 "recv_buf_size": 2097152, 00:05:22.977 "send_buf_size": 2097152, 00:05:22.977 "enable_recv_pipe": true, 00:05:22.977 "enable_quickack": false, 00:05:22.977 "enable_placement_id": 0, 00:05:22.977 "enable_zerocopy_send_server": true, 00:05:22.977 "enable_zerocopy_send_client": false, 00:05:22.977 "zerocopy_threshold": 0, 00:05:22.977 "tls_version": 0, 00:05:22.977 "enable_ktls": false 00:05:22.977 } 00:05:22.977 } 00:05:22.977 ] 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "subsystem": "vmd", 00:05:22.977 "config": [] 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "subsystem": "accel", 00:05:22.977 "config": [ 00:05:22.977 { 00:05:22.977 "method": "accel_set_options", 00:05:22.977 "params": { 00:05:22.977 "small_cache_size": 128, 00:05:22.977 "large_cache_size": 16, 00:05:22.977 "task_count": 2048, 00:05:22.977 "sequence_count": 2048, 00:05:22.977 "buf_count": 2048 00:05:22.977 } 00:05:22.977 } 00:05:22.977 ] 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "subsystem": "bdev", 00:05:22.977 "config": [ 00:05:22.977 { 00:05:22.977 "method": "bdev_set_options", 00:05:22.977 "params": { 00:05:22.977 "bdev_io_pool_size": 65535, 00:05:22.977 "bdev_io_cache_size": 256, 00:05:22.977 "bdev_auto_examine": true, 00:05:22.977 "iobuf_small_cache_size": 128, 00:05:22.977 "iobuf_large_cache_size": 16 00:05:22.977 } 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "method": "bdev_raid_set_options", 00:05:22.977 "params": { 00:05:22.977 "process_window_size_kb": 1024, 00:05:22.977 "process_max_bandwidth_mb_sec": 0 00:05:22.977 } 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "method": "bdev_iscsi_set_options", 00:05:22.977 "params": { 00:05:22.977 "timeout_sec": 30 00:05:22.977 } 00:05:22.977 }, 00:05:22.977 { 00:05:22.977 "method": "bdev_nvme_set_options", 00:05:22.977 "params": { 00:05:22.977 "action_on_timeout": "none", 00:05:22.977 "timeout_us": 0, 00:05:22.977 "timeout_admin_us": 0, 00:05:22.977 "keep_alive_timeout_ms": 10000, 00:05:22.977 "arbitration_burst": 0, 00:05:22.977 "low_priority_weight": 0, 00:05:22.977 "medium_priority_weight": 0, 00:05:22.977 "high_priority_weight": 0, 00:05:22.977 "nvme_adminq_poll_period_us": 10000, 00:05:22.977 "nvme_ioq_poll_period_us": 0, 00:05:22.977 "io_queue_requests": 0, 00:05:22.977 "delay_cmd_submit": true, 00:05:22.977 "transport_retry_count": 4, 00:05:22.977 "bdev_retry_count": 3, 00:05:22.977 "transport_ack_timeout": 0, 00:05:22.977 "ctrlr_loss_timeout_sec": 0, 00:05:22.977 "reconnect_delay_sec": 0, 00:05:22.977 "fast_io_fail_timeout_sec": 0, 00:05:22.977 "disable_auto_failback": false, 00:05:22.977 "generate_uuids": false, 00:05:22.977 "transport_tos": 0, 00:05:22.977 "nvme_error_stat": false, 00:05:22.977 "rdma_srq_size": 0, 00:05:22.977 "io_path_stat": false, 
00:05:22.977 "allow_accel_sequence": false, 00:05:22.977 "rdma_max_cq_size": 0, 00:05:22.977 "rdma_cm_event_timeout_ms": 0, 00:05:22.978 "dhchap_digests": [ 00:05:22.978 "sha256", 00:05:22.978 "sha384", 00:05:22.978 "sha512" 00:05:22.978 ], 00:05:22.978 "dhchap_dhgroups": [ 00:05:22.978 "null", 00:05:22.978 "ffdhe2048", 00:05:22.978 "ffdhe3072", 00:05:22.978 "ffdhe4096", 00:05:22.978 "ffdhe6144", 00:05:22.978 "ffdhe8192" 00:05:22.978 ] 00:05:22.978 } 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "method": "bdev_nvme_set_hotplug", 00:05:22.978 "params": { 00:05:22.978 "period_us": 100000, 00:05:22.978 "enable": false 00:05:22.978 } 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "method": "bdev_wait_for_examine" 00:05:22.978 } 00:05:22.978 ] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "scsi", 00:05:22.978 "config": null 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "scheduler", 00:05:22.978 "config": [ 00:05:22.978 { 00:05:22.978 "method": "framework_set_scheduler", 00:05:22.978 "params": { 00:05:22.978 "name": "static" 00:05:22.978 } 00:05:22.978 } 00:05:22.978 ] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "vhost_scsi", 00:05:22.978 "config": [] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "vhost_blk", 00:05:22.978 "config": [] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "ublk", 00:05:22.978 "config": [] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "nbd", 00:05:22.978 "config": [] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "nvmf", 00:05:22.978 "config": [ 00:05:22.978 { 00:05:22.978 "method": "nvmf_set_config", 00:05:22.978 "params": { 00:05:22.978 "discovery_filter": "match_any", 00:05:22.978 "admin_cmd_passthru": { 00:05:22.978 "identify_ctrlr": false 00:05:22.978 }, 00:05:22.978 "dhchap_digests": [ 00:05:22.978 "sha256", 00:05:22.978 "sha384", 00:05:22.978 "sha512" 00:05:22.978 ], 00:05:22.978 "dhchap_dhgroups": [ 00:05:22.978 "null", 00:05:22.978 "ffdhe2048", 00:05:22.978 "ffdhe3072", 00:05:22.978 "ffdhe4096", 00:05:22.978 "ffdhe6144", 00:05:22.978 "ffdhe8192" 00:05:22.978 ] 00:05:22.978 } 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "method": "nvmf_set_max_subsystems", 00:05:22.978 "params": { 00:05:22.978 "max_subsystems": 1024 00:05:22.978 } 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "method": "nvmf_set_crdt", 00:05:22.978 "params": { 00:05:22.978 "crdt1": 0, 00:05:22.978 "crdt2": 0, 00:05:22.978 "crdt3": 0 00:05:22.978 } 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "method": "nvmf_create_transport", 00:05:22.978 "params": { 00:05:22.978 "trtype": "TCP", 00:05:22.978 "max_queue_depth": 128, 00:05:22.978 "max_io_qpairs_per_ctrlr": 127, 00:05:22.978 "in_capsule_data_size": 4096, 00:05:22.978 "max_io_size": 131072, 00:05:22.978 "io_unit_size": 131072, 00:05:22.978 "max_aq_depth": 128, 00:05:22.978 "num_shared_buffers": 511, 00:05:22.978 "buf_cache_size": 4294967295, 00:05:22.978 "dif_insert_or_strip": false, 00:05:22.978 "zcopy": false, 00:05:22.978 "c2h_success": true, 00:05:22.978 "sock_priority": 0, 00:05:22.978 "abort_timeout_sec": 1, 00:05:22.978 "ack_timeout": 0, 00:05:22.978 "data_wr_pool_size": 0 00:05:22.978 } 00:05:22.978 } 00:05:22.978 ] 00:05:22.978 }, 00:05:22.978 { 00:05:22.978 "subsystem": "iscsi", 00:05:22.978 "config": [ 00:05:22.978 { 00:05:22.978 "method": "iscsi_set_options", 00:05:22.978 "params": { 00:05:22.978 "node_base": "iqn.2016-06.io.spdk", 00:05:22.978 "max_sessions": 128, 00:05:22.978 "max_connections_per_session": 2, 00:05:22.978 "max_queue_depth": 64, 00:05:22.978 
"default_time2wait": 2, 00:05:22.978 "default_time2retain": 20, 00:05:22.978 "first_burst_length": 8192, 00:05:22.978 "immediate_data": true, 00:05:22.978 "allow_duplicated_isid": false, 00:05:22.978 "error_recovery_level": 0, 00:05:22.978 "nop_timeout": 60, 00:05:22.978 "nop_in_interval": 30, 00:05:22.978 "disable_chap": false, 00:05:22.978 "require_chap": false, 00:05:22.978 "mutual_chap": false, 00:05:22.978 "chap_group": 0, 00:05:22.978 "max_large_datain_per_connection": 64, 00:05:22.978 "max_r2t_per_connection": 4, 00:05:22.978 "pdu_pool_size": 36864, 00:05:22.978 "immediate_data_pool_size": 16384, 00:05:22.978 "data_out_pool_size": 2048 00:05:22.978 } 00:05:22.978 } 00:05:22.978 ] 00:05:22.978 } 00:05:22.978 ] 00:05:22.978 } 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57471 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57471 ']' 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57471 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57471 00:05:22.978 killing process with pid 57471 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57471' 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57471 00:05:22.978 15:53:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57471 00:05:24.883 15:53:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57515 00:05:24.883 15:53:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.883 15:53:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57515 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57515 ']' 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57515 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57515 00:05:30.144 killing process with pid 57515 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57515' 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57515 00:05:30.144 15:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57515 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.087 ************************************ 00:05:31.087 END TEST skip_rpc_with_json 00:05:31.087 ************************************ 00:05:31.087 00:05:31.087 real 0m9.275s 00:05:31.087 user 0m8.805s 00:05:31.087 sys 0m0.685s 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.087 15:53:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.087 15:53:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.087 15:53:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.087 15:53:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.087 ************************************ 00:05:31.087 START TEST skip_rpc_with_delay 00:05:31.087 ************************************ 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.087 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.344 [2024-11-20 15:53:29.343619] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
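Two expected outcomes land back to back here. First, the grep for 'TCP Transport Init' above is the payoff of skip_rpc_with_json: the config.json saved over RPC from the first target (pid 57471) was replayed into a second target (pid 57515) started with --no-rpc-server --json, and the transport-init notice in that target's log proves the nvmf transport was rebuilt purely from the saved JSON, with no live RPC involved. Second, the *ERROR* line just printed is the failure skip_rpc_with_delay deliberately provokes: --wait-for-rpc cannot be combined with --no-rpc-server. A condensed sketch of the save/replay round trip, using the CONFIG_PATH/LOG_PATH variables and helpers defined earlier in this log (the redirect into LOG_PATH is inferred from the grep, so treat this as illustrative rather than the test's verbatim source):

    "$spdk_tgt" -m 0x1 &                          # first target, RPC server enabled
    spdk_pid=$!
    waitforlisten "$spdk_pid"                     # poll /var/tmp/spdk.sock
    rpc_cmd nvmf_create_transport -t tcp          # runtime change worth capturing
    rpc_cmd save_config > "$CONFIG_PATH"          # the JSON subsystem tree dumped above
    killprocess "$spdk_pid"
    "$spdk_tgt" --no-rpc-server -m 0x1 \
        --json "$CONFIG_PATH" > "$LOG_PATH" 2>&1 &  # second target replays the config
    spdk_pid=$!
    sleep 5
    killprocess "$spdk_pid"
    grep -q 'TCP Transport Init' "$LOG_PATH"      # replay re-created the TCP transport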
00:05:31.344 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:31.344 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.344 ************************************ 00:05:31.344 END TEST skip_rpc_with_delay 00:05:31.344 ************************************ 00:05:31.344 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.344 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.344 00:05:31.344 real 0m0.131s 00:05:31.344 user 0m0.050s 00:05:31.344 sys 0m0.079s 00:05:31.344 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.344 15:53:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.344 15:53:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.344 15:53:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.344 15:53:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.344 15:53:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.344 15:53:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.344 15:53:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.344 ************************************ 00:05:31.344 START TEST exit_on_failed_rpc_init 00:05:31.344 ************************************ 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57638 00:05:31.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57638 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57638 ']' 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.344 15:53:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.344 [2024-11-20 15:53:29.529016] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:31.344 [2024-11-20 15:53:29.529179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57638 ] 00:05:31.599 [2024-11-20 15:53:29.690066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.599 [2024-11-20 15:53:29.792436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:32.163 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.420 [2024-11-20 15:53:30.483125] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:32.420 [2024-11-20 15:53:30.483275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57656 ] 00:05:32.420 [2024-11-20 15:53:30.644558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.678 [2024-11-20 15:53:30.745700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.678 [2024-11-20 15:53:30.745804] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:32.678 [2024-11-20 15:53:30.745818] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.678 [2024-11-20 15:53:30.745832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57638 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57638 ']' 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57638 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57638 00:05:32.936 killing process with pid 57638 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57638' 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57638 00:05:32.936 15:53:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57638 00:05:34.347 00:05:34.347 real 0m3.068s 00:05:34.347 user 0m3.366s 00:05:34.347 sys 0m0.453s 00:05:34.347 15:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.347 15:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.347 ************************************ 00:05:34.347 END TEST exit_on_failed_rpc_init 00:05:34.347 ************************************ 00:05:34.347 15:53:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.347 00:05:34.347 real 0m19.208s 00:05:34.347 user 0m18.285s 00:05:34.347 sys 0m1.755s 00:05:34.347 15:53:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.347 ************************************ 00:05:34.347 END TEST skip_rpc 00:05:34.347 ************************************ 00:05:34.347 15:53:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.347 15:53:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.347 15:53:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.347 15:53:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.347 15:53:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.347 
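The 19.208s summary above closes the whole skip_rpc suite, and the banner and timing lines that structure this entire log come from the harness's run_test wrapper: it prints the START TEST banner, times the function it is given (producing the real/user/sys triplets), and prints the matching END TEST banner, which is why suites nest cleanly — run_test skip_rpc itself calls run_test for each sub-test, as the rpc/skip_rpc.sh trace shows. A sketch reconstructed from that trace pattern, assuming this shape for the wrapper rather than quoting autotest_common.sh verbatim:

    run_test() {                  # reconstructed from the trace; not verbatim source
        local test_name="$1"; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                 # emits the real/user/sys lines seen in this log
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }

The rpc_client run that starts below follows the same shape, with the timed command being the compiled rpc_client_test binary whose "OK" line appears in its output.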
************************************ 00:05:34.347 START TEST rpc_client 00:05:34.347 ************************************ 00:05:34.347 15:53:32 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.605 * Looking for test storage... 00:05:34.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.605 15:53:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.605 --rc genhtml_branch_coverage=1 00:05:34.605 --rc genhtml_function_coverage=1 00:05:34.605 --rc genhtml_legend=1 00:05:34.605 --rc geninfo_all_blocks=1 00:05:34.605 --rc geninfo_unexecuted_blocks=1 00:05:34.605 00:05:34.605 ' 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.605 --rc genhtml_branch_coverage=1 00:05:34.605 --rc genhtml_function_coverage=1 00:05:34.605 --rc genhtml_legend=1 00:05:34.605 --rc geninfo_all_blocks=1 00:05:34.605 --rc geninfo_unexecuted_blocks=1 00:05:34.605 00:05:34.605 ' 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.605 --rc genhtml_branch_coverage=1 00:05:34.605 --rc genhtml_function_coverage=1 00:05:34.605 --rc genhtml_legend=1 00:05:34.605 --rc geninfo_all_blocks=1 00:05:34.605 --rc geninfo_unexecuted_blocks=1 00:05:34.605 00:05:34.605 ' 00:05:34.605 15:53:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.605 --rc genhtml_branch_coverage=1 00:05:34.605 --rc genhtml_function_coverage=1 00:05:34.605 --rc genhtml_legend=1 00:05:34.605 --rc geninfo_all_blocks=1 00:05:34.605 --rc geninfo_unexecuted_blocks=1 00:05:34.605 00:05:34.605 ' 00:05:34.606 15:53:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.606 OK 00:05:34.606 15:53:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.606 00:05:34.606 real 0m0.193s 00:05:34.606 user 0m0.106s 00:05:34.606 sys 0m0.093s 00:05:34.606 15:53:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.606 15:53:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:34.606 ************************************ 00:05:34.606 END TEST rpc_client 00:05:34.606 ************************************ 00:05:34.606 15:53:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.606 15:53:32 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.606 15:53:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.606 15:53:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.606 ************************************ 00:05:34.606 START TEST json_config 00:05:34.606 ************************************ 00:05:34.606 15:53:32 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.864 15:53:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.864 15:53:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.864 15:53:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.864 15:53:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.864 15:53:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.864 15:53:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:34.864 15:53:32 json_config -- scripts/common.sh@345 -- # : 1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.864 15:53:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.864 15:53:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@353 -- # local d=1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.864 15:53:32 json_config -- scripts/common.sh@355 -- # echo 1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.864 15:53:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@353 -- # local d=2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.864 15:53:32 json_config -- scripts/common.sh@355 -- # echo 2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.864 15:53:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.864 15:53:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.864 15:53:32 json_config -- scripts/common.sh@368 -- # return 0 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.864 --rc genhtml_branch_coverage=1 00:05:34.864 --rc genhtml_function_coverage=1 00:05:34.864 --rc genhtml_legend=1 00:05:34.864 --rc geninfo_all_blocks=1 00:05:34.864 --rc geninfo_unexecuted_blocks=1 00:05:34.864 00:05:34.864 ' 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.864 --rc genhtml_branch_coverage=1 00:05:34.864 --rc genhtml_function_coverage=1 00:05:34.864 --rc genhtml_legend=1 00:05:34.864 --rc geninfo_all_blocks=1 00:05:34.864 --rc geninfo_unexecuted_blocks=1 00:05:34.864 00:05:34.864 ' 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.864 --rc genhtml_branch_coverage=1 00:05:34.864 --rc genhtml_function_coverage=1 00:05:34.864 --rc genhtml_legend=1 00:05:34.864 --rc geninfo_all_blocks=1 00:05:34.864 --rc geninfo_unexecuted_blocks=1 00:05:34.864 00:05:34.864 ' 00:05:34.864 15:53:32 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.864 --rc genhtml_branch_coverage=1 00:05:34.864 --rc genhtml_function_coverage=1 00:05:34.864 --rc genhtml_legend=1 00:05:34.864 --rc geninfo_all_blocks=1 00:05:34.864 --rc geninfo_unexecuted_blocks=1 00:05:34.864 00:05:34.864 ' 00:05:34.864 15:53:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.864 15:53:32 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5dd8629-8fab-42cc-a050-2b8fda752ad8 00:05:34.864 15:53:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d5dd8629-8fab-42cc-a050-2b8fda752ad8 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.865 15:53:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.865 15:53:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.865 15:53:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.865 15:53:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.865 15:53:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.865 15:53:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.865 15:53:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.865 15:53:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:34.865 15:53:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@51 -- # : 0 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.865 15:53:32 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.865 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.865 15:53:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:34.865 WARNING: No tests are enabled so not running JSON configuration tests 00:05:34.865 15:53:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:34.865 00:05:34.865 real 0m0.150s 00:05:34.865 user 0m0.096s 00:05:34.865 sys 0m0.053s 00:05:34.865 15:53:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.865 15:53:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.865 ************************************ 00:05:34.865 END TEST json_config 00:05:34.865 ************************************ 00:05:34.865 15:53:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.865 15:53:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.865 15:53:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.865 15:53:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.865 ************************************ 00:05:34.865 START TEST json_config_extra_key 00:05:34.865 ************************************ 00:05:34.865 15:53:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.865 15:53:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.865 15:53:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.865 15:53:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.173 15:53:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.173 15:53:33 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:35.173 15:53:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.173 15:53:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.173 --rc genhtml_branch_coverage=1 00:05:35.173 --rc genhtml_function_coverage=1 00:05:35.173 --rc genhtml_legend=1 00:05:35.173 --rc geninfo_all_blocks=1 00:05:35.173 --rc geninfo_unexecuted_blocks=1 00:05:35.173 00:05:35.173 ' 00:05:35.173 15:53:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.173 --rc genhtml_branch_coverage=1 00:05:35.173 --rc genhtml_function_coverage=1 00:05:35.173 --rc genhtml_legend=1 00:05:35.173 --rc geninfo_all_blocks=1 00:05:35.173 --rc geninfo_unexecuted_blocks=1 00:05:35.173 00:05:35.173 ' 00:05:35.173 15:53:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.173 --rc genhtml_branch_coverage=1 00:05:35.173 --rc genhtml_function_coverage=1 00:05:35.173 --rc genhtml_legend=1 00:05:35.173 --rc geninfo_all_blocks=1 00:05:35.173 --rc geninfo_unexecuted_blocks=1 00:05:35.173 00:05:35.173 ' 00:05:35.173 15:53:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.173 --rc genhtml_branch_coverage=1 00:05:35.173 --rc 
genhtml_function_coverage=1 00:05:35.173 --rc genhtml_legend=1 00:05:35.173 --rc geninfo_all_blocks=1 00:05:35.173 --rc geninfo_unexecuted_blocks=1 00:05:35.173 00:05:35.173 ' 00:05:35.173 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5dd8629-8fab-42cc-a050-2b8fda752ad8 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d5dd8629-8fab-42cc-a050-2b8fda752ad8 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.173 15:53:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.173 15:53:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.173 15:53:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.173 15:53:33 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.173 15:53:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.173 15:53:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.173 15:53:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.174 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.174 15:53:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.174 INFO: launching applications... 
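The INFO line above is followed by the actual target launch traced below: spdk_tgt is started with -m 0x1 -s 1024, an RPC socket at /var/tmp/spdk_tgt.sock, and the pre-baked extra_key.json config, and the test then waits for that socket to come up. A minimal sketch of the launch-and-wait pattern, using the binary and config paths from this run; the polling loop is an approximation, not the test's actual waitforlisten helper:

  # Sketch: start spdk_tgt with a JSON config and poll for its RPC socket.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  SOCK=/var/tmp/spdk_tgt.sock
  "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  for ((i = 0; i < 100; i++)); do
      [[ -S "$SOCK" ]] && break   # socket node appears once the app is listening
      sleep 0.1
  done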
00:05:35.174 15:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57849 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.174 Waiting for target to run... 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57849 /var/tmp/spdk_tgt.sock 00:05:35.174 15:53:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.174 15:53:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57849 ']' 00:05:35.174 15:53:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.174 15:53:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.174 15:53:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.174 15:53:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.174 15:53:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.174 [2024-11-20 15:53:33.264461] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:35.174 [2024-11-20 15:53:33.264792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57849 ] 00:05:35.433 [2024-11-20 15:53:33.647646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.691 [2024-11-20 15:53:33.741151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.949 15:53:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.949 15:53:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:35.949 00:05:35.949 INFO: shutting down applications... 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.949 15:53:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:35.949 15:53:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57849 ]] 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57849 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57849 00:05:35.949 15:53:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.516 15:53:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.516 15:53:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.516 15:53:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57849 00:05:36.516 15:53:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.081 15:53:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.081 15:53:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.081 15:53:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57849 00:05:37.081 15:53:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.646 15:53:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.646 15:53:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.646 15:53:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57849 00:05:37.646 15:53:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.213 SPDK target shutdown done 00:05:38.213 Success 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57849 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.213 15:53:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.213 15:53:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:38.213 00:05:38.213 real 0m3.155s 00:05:38.213 user 0m2.622s 00:05:38.213 sys 0m0.466s 00:05:38.213 ************************************ 00:05:38.213 END TEST json_config_extra_key 00:05:38.213 ************************************ 00:05:38.213 15:53:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.213 15:53:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.213 15:53:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.213 15:53:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.213 15:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.213 15:53:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.213 
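The shutdown traced above follows the SIGINT-then-poll pattern from json_config/common.sh: SIGINT is sent to pid 57849, then kill -0 probes the process every half second for up to 30 iterations until it exits. Condensed from the traced loop (pid hard-coded from this run):

  kill -SIGINT 57849                       # ask the target to exit cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 57849 2>/dev/null || break   # kill -0 only tests liveness
      sleep 0.5
  done
  echo 'SPDK target shutdown done'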
************************************ 00:05:38.213 START TEST alias_rpc 00:05:38.213 ************************************ 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.214 * Looking for test storage... 00:05:38.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.214 15:53:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 15:53:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.214 15:53:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57948 00:05:38.214 15:53:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57948 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57948 ']' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.214 15:53:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.214 15:53:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.214 [2024-11-20 15:53:36.443984] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:38.214 [2024-11-20 15:53:36.444125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57948 ] 00:05:38.473 [2024-11-20 15:53:36.605298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.473 [2024-11-20 15:53:36.710684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.406 15:53:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.406 15:53:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57948 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57948 ']' 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57948 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57948 00:05:39.406 killing process with pid 57948 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57948' 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 57948 00:05:39.406 15:53:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 57948 00:05:41.305 ************************************ 00:05:41.305 END TEST alias_rpc 00:05:41.305 ************************************ 00:05:41.305 00:05:41.305 real 0m3.014s 00:05:41.305 user 0m3.050s 00:05:41.305 sys 0m0.442s 00:05:41.305 15:53:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.305 15:53:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.305 15:53:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:41.305 15:53:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.305 15:53:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.305 15:53:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.305 15:53:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.305 ************************************ 00:05:41.305 START TEST spdkcli_tcp 00:05:41.305 ************************************ 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.305 * Looking for test storage... 
00:05:41.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.305 15:53:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.305 --rc genhtml_branch_coverage=1 00:05:41.305 --rc genhtml_function_coverage=1 00:05:41.305 --rc genhtml_legend=1 00:05:41.305 --rc geninfo_all_blocks=1 00:05:41.305 --rc geninfo_unexecuted_blocks=1 00:05:41.305 00:05:41.305 ' 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.305 --rc genhtml_branch_coverage=1 00:05:41.305 --rc genhtml_function_coverage=1 00:05:41.305 --rc genhtml_legend=1 00:05:41.305 --rc geninfo_all_blocks=1 00:05:41.305 --rc geninfo_unexecuted_blocks=1 00:05:41.305 
00:05:41.305 ' 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.305 --rc genhtml_branch_coverage=1 00:05:41.305 --rc genhtml_function_coverage=1 00:05:41.305 --rc genhtml_legend=1 00:05:41.305 --rc geninfo_all_blocks=1 00:05:41.305 --rc geninfo_unexecuted_blocks=1 00:05:41.305 00:05:41.305 ' 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.305 --rc genhtml_branch_coverage=1 00:05:41.305 --rc genhtml_function_coverage=1 00:05:41.305 --rc genhtml_legend=1 00:05:41.305 --rc geninfo_all_blocks=1 00:05:41.305 --rc geninfo_unexecuted_blocks=1 00:05:41.305 00:05:41.305 ' 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58044 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58044 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58044 ']' 00:05:41.305 15:53:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.305 15:53:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.305 [2024-11-20 15:53:39.502001] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:41.305 [2024-11-20 15:53:39.502124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 00:05:41.561 [2024-11-20 15:53:39.658114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.561 [2024-11-20 15:53:39.746209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.561 [2024-11-20 15:53:39.746373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.125 15:53:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.125 15:53:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:42.125 15:53:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58055 00:05:42.125 15:53:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.125 15:53:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.383 [ 00:05:42.383 "bdev_malloc_delete", 00:05:42.383 "bdev_malloc_create", 00:05:42.383 "bdev_null_resize", 00:05:42.383 "bdev_null_delete", 00:05:42.383 "bdev_null_create", 00:05:42.383 "bdev_nvme_cuse_unregister", 00:05:42.383 "bdev_nvme_cuse_register", 00:05:42.383 "bdev_opal_new_user", 00:05:42.383 "bdev_opal_set_lock_state", 00:05:42.383 "bdev_opal_delete", 00:05:42.383 "bdev_opal_get_info", 00:05:42.383 "bdev_opal_create", 00:05:42.383 "bdev_nvme_opal_revert", 00:05:42.383 "bdev_nvme_opal_init", 00:05:42.383 "bdev_nvme_send_cmd", 00:05:42.383 "bdev_nvme_set_keys", 00:05:42.383 "bdev_nvme_get_path_iostat", 00:05:42.383 "bdev_nvme_get_mdns_discovery_info", 00:05:42.383 "bdev_nvme_stop_mdns_discovery", 00:05:42.383 "bdev_nvme_start_mdns_discovery", 00:05:42.383 "bdev_nvme_set_multipath_policy", 00:05:42.383 "bdev_nvme_set_preferred_path", 00:05:42.383 "bdev_nvme_get_io_paths", 00:05:42.383 "bdev_nvme_remove_error_injection", 00:05:42.383 "bdev_nvme_add_error_injection", 00:05:42.383 "bdev_nvme_get_discovery_info", 00:05:42.383 "bdev_nvme_stop_discovery", 00:05:42.383 "bdev_nvme_start_discovery", 00:05:42.383 "bdev_nvme_get_controller_health_info", 00:05:42.383 "bdev_nvme_disable_controller", 00:05:42.383 "bdev_nvme_enable_controller", 00:05:42.383 "bdev_nvme_reset_controller", 00:05:42.383 "bdev_nvme_get_transport_statistics", 00:05:42.383 "bdev_nvme_apply_firmware", 00:05:42.383 "bdev_nvme_detach_controller", 00:05:42.383 "bdev_nvme_get_controllers", 00:05:42.383 "bdev_nvme_attach_controller", 00:05:42.383 "bdev_nvme_set_hotplug", 00:05:42.383 "bdev_nvme_set_options", 00:05:42.383 "bdev_passthru_delete", 00:05:42.383 "bdev_passthru_create", 00:05:42.383 "bdev_lvol_set_parent_bdev", 00:05:42.383 "bdev_lvol_set_parent", 00:05:42.383 "bdev_lvol_check_shallow_copy", 00:05:42.383 "bdev_lvol_start_shallow_copy", 00:05:42.383 "bdev_lvol_grow_lvstore", 00:05:42.383 "bdev_lvol_get_lvols", 00:05:42.383 "bdev_lvol_get_lvstores", 00:05:42.383 "bdev_lvol_delete", 00:05:42.383 "bdev_lvol_set_read_only", 00:05:42.383 "bdev_lvol_resize", 00:05:42.383 "bdev_lvol_decouple_parent", 00:05:42.383 "bdev_lvol_inflate", 00:05:42.383 "bdev_lvol_rename", 00:05:42.383 "bdev_lvol_clone_bdev", 00:05:42.383 "bdev_lvol_clone", 00:05:42.383 "bdev_lvol_snapshot", 00:05:42.383 "bdev_lvol_create", 00:05:42.383 "bdev_lvol_delete_lvstore", 00:05:42.383 "bdev_lvol_rename_lvstore", 00:05:42.383 
"bdev_lvol_create_lvstore", 00:05:42.383 "bdev_raid_set_options", 00:05:42.383 "bdev_raid_remove_base_bdev", 00:05:42.383 "bdev_raid_add_base_bdev", 00:05:42.383 "bdev_raid_delete", 00:05:42.383 "bdev_raid_create", 00:05:42.383 "bdev_raid_get_bdevs", 00:05:42.383 "bdev_error_inject_error", 00:05:42.383 "bdev_error_delete", 00:05:42.383 "bdev_error_create", 00:05:42.383 "bdev_split_delete", 00:05:42.383 "bdev_split_create", 00:05:42.383 "bdev_delay_delete", 00:05:42.383 "bdev_delay_create", 00:05:42.383 "bdev_delay_update_latency", 00:05:42.383 "bdev_zone_block_delete", 00:05:42.383 "bdev_zone_block_create", 00:05:42.383 "blobfs_create", 00:05:42.383 "blobfs_detect", 00:05:42.383 "blobfs_set_cache_size", 00:05:42.383 "bdev_xnvme_delete", 00:05:42.383 "bdev_xnvme_create", 00:05:42.383 "bdev_aio_delete", 00:05:42.383 "bdev_aio_rescan", 00:05:42.383 "bdev_aio_create", 00:05:42.383 "bdev_ftl_set_property", 00:05:42.383 "bdev_ftl_get_properties", 00:05:42.383 "bdev_ftl_get_stats", 00:05:42.383 "bdev_ftl_unmap", 00:05:42.383 "bdev_ftl_unload", 00:05:42.383 "bdev_ftl_delete", 00:05:42.383 "bdev_ftl_load", 00:05:42.383 "bdev_ftl_create", 00:05:42.383 "bdev_virtio_attach_controller", 00:05:42.383 "bdev_virtio_scsi_get_devices", 00:05:42.383 "bdev_virtio_detach_controller", 00:05:42.383 "bdev_virtio_blk_set_hotplug", 00:05:42.383 "bdev_iscsi_delete", 00:05:42.383 "bdev_iscsi_create", 00:05:42.383 "bdev_iscsi_set_options", 00:05:42.383 "accel_error_inject_error", 00:05:42.383 "ioat_scan_accel_module", 00:05:42.383 "dsa_scan_accel_module", 00:05:42.383 "iaa_scan_accel_module", 00:05:42.383 "keyring_file_remove_key", 00:05:42.383 "keyring_file_add_key", 00:05:42.383 "keyring_linux_set_options", 00:05:42.383 "fsdev_aio_delete", 00:05:42.383 "fsdev_aio_create", 00:05:42.383 "iscsi_get_histogram", 00:05:42.383 "iscsi_enable_histogram", 00:05:42.383 "iscsi_set_options", 00:05:42.383 "iscsi_get_auth_groups", 00:05:42.383 "iscsi_auth_group_remove_secret", 00:05:42.383 "iscsi_auth_group_add_secret", 00:05:42.383 "iscsi_delete_auth_group", 00:05:42.383 "iscsi_create_auth_group", 00:05:42.383 "iscsi_set_discovery_auth", 00:05:42.383 "iscsi_get_options", 00:05:42.383 "iscsi_target_node_request_logout", 00:05:42.383 "iscsi_target_node_set_redirect", 00:05:42.383 "iscsi_target_node_set_auth", 00:05:42.383 "iscsi_target_node_add_lun", 00:05:42.383 "iscsi_get_stats", 00:05:42.383 "iscsi_get_connections", 00:05:42.383 "iscsi_portal_group_set_auth", 00:05:42.383 "iscsi_start_portal_group", 00:05:42.383 "iscsi_delete_portal_group", 00:05:42.383 "iscsi_create_portal_group", 00:05:42.383 "iscsi_get_portal_groups", 00:05:42.383 "iscsi_delete_target_node", 00:05:42.383 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.383 "iscsi_target_node_add_pg_ig_maps", 00:05:42.383 "iscsi_create_target_node", 00:05:42.383 "iscsi_get_target_nodes", 00:05:42.383 "iscsi_delete_initiator_group", 00:05:42.383 "iscsi_initiator_group_remove_initiators", 00:05:42.383 "iscsi_initiator_group_add_initiators", 00:05:42.383 "iscsi_create_initiator_group", 00:05:42.383 "iscsi_get_initiator_groups", 00:05:42.383 "nvmf_set_crdt", 00:05:42.383 "nvmf_set_config", 00:05:42.383 "nvmf_set_max_subsystems", 00:05:42.383 "nvmf_stop_mdns_prr", 00:05:42.383 "nvmf_publish_mdns_prr", 00:05:42.383 "nvmf_subsystem_get_listeners", 00:05:42.383 "nvmf_subsystem_get_qpairs", 00:05:42.383 "nvmf_subsystem_get_controllers", 00:05:42.383 "nvmf_get_stats", 00:05:42.383 "nvmf_get_transports", 00:05:42.383 "nvmf_create_transport", 00:05:42.383 "nvmf_get_targets", 00:05:42.383 
"nvmf_delete_target", 00:05:42.383 "nvmf_create_target", 00:05:42.383 "nvmf_subsystem_allow_any_host", 00:05:42.383 "nvmf_subsystem_set_keys", 00:05:42.383 "nvmf_subsystem_remove_host", 00:05:42.383 "nvmf_subsystem_add_host", 00:05:42.383 "nvmf_ns_remove_host", 00:05:42.383 "nvmf_ns_add_host", 00:05:42.383 "nvmf_subsystem_remove_ns", 00:05:42.383 "nvmf_subsystem_set_ns_ana_group", 00:05:42.383 "nvmf_subsystem_add_ns", 00:05:42.383 "nvmf_subsystem_listener_set_ana_state", 00:05:42.383 "nvmf_discovery_get_referrals", 00:05:42.383 "nvmf_discovery_remove_referral", 00:05:42.383 "nvmf_discovery_add_referral", 00:05:42.383 "nvmf_subsystem_remove_listener", 00:05:42.383 "nvmf_subsystem_add_listener", 00:05:42.383 "nvmf_delete_subsystem", 00:05:42.383 "nvmf_create_subsystem", 00:05:42.383 "nvmf_get_subsystems", 00:05:42.383 "env_dpdk_get_mem_stats", 00:05:42.383 "nbd_get_disks", 00:05:42.383 "nbd_stop_disk", 00:05:42.383 "nbd_start_disk", 00:05:42.383 "ublk_recover_disk", 00:05:42.383 "ublk_get_disks", 00:05:42.383 "ublk_stop_disk", 00:05:42.383 "ublk_start_disk", 00:05:42.383 "ublk_destroy_target", 00:05:42.383 "ublk_create_target", 00:05:42.383 "virtio_blk_create_transport", 00:05:42.383 "virtio_blk_get_transports", 00:05:42.383 "vhost_controller_set_coalescing", 00:05:42.383 "vhost_get_controllers", 00:05:42.383 "vhost_delete_controller", 00:05:42.383 "vhost_create_blk_controller", 00:05:42.383 "vhost_scsi_controller_remove_target", 00:05:42.383 "vhost_scsi_controller_add_target", 00:05:42.383 "vhost_start_scsi_controller", 00:05:42.383 "vhost_create_scsi_controller", 00:05:42.383 "thread_set_cpumask", 00:05:42.383 "scheduler_set_options", 00:05:42.383 "framework_get_governor", 00:05:42.383 "framework_get_scheduler", 00:05:42.383 "framework_set_scheduler", 00:05:42.383 "framework_get_reactors", 00:05:42.383 "thread_get_io_channels", 00:05:42.383 "thread_get_pollers", 00:05:42.383 "thread_get_stats", 00:05:42.383 "framework_monitor_context_switch", 00:05:42.383 "spdk_kill_instance", 00:05:42.383 "log_enable_timestamps", 00:05:42.383 "log_get_flags", 00:05:42.383 "log_clear_flag", 00:05:42.383 "log_set_flag", 00:05:42.383 "log_get_level", 00:05:42.383 "log_set_level", 00:05:42.383 "log_get_print_level", 00:05:42.383 "log_set_print_level", 00:05:42.384 "framework_enable_cpumask_locks", 00:05:42.384 "framework_disable_cpumask_locks", 00:05:42.384 "framework_wait_init", 00:05:42.384 "framework_start_init", 00:05:42.384 "scsi_get_devices", 00:05:42.384 "bdev_get_histogram", 00:05:42.384 "bdev_enable_histogram", 00:05:42.384 "bdev_set_qos_limit", 00:05:42.384 "bdev_set_qd_sampling_period", 00:05:42.384 "bdev_get_bdevs", 00:05:42.384 "bdev_reset_iostat", 00:05:42.384 "bdev_get_iostat", 00:05:42.384 "bdev_examine", 00:05:42.384 "bdev_wait_for_examine", 00:05:42.384 "bdev_set_options", 00:05:42.384 "accel_get_stats", 00:05:42.384 "accel_set_options", 00:05:42.384 "accel_set_driver", 00:05:42.384 "accel_crypto_key_destroy", 00:05:42.384 "accel_crypto_keys_get", 00:05:42.384 "accel_crypto_key_create", 00:05:42.384 "accel_assign_opc", 00:05:42.384 "accel_get_module_info", 00:05:42.384 "accel_get_opc_assignments", 00:05:42.384 "vmd_rescan", 00:05:42.384 "vmd_remove_device", 00:05:42.384 "vmd_enable", 00:05:42.384 "sock_get_default_impl", 00:05:42.384 "sock_set_default_impl", 00:05:42.384 "sock_impl_set_options", 00:05:42.384 "sock_impl_get_options", 00:05:42.384 "iobuf_get_stats", 00:05:42.384 "iobuf_set_options", 00:05:42.384 "keyring_get_keys", 00:05:42.384 "framework_get_pci_devices", 00:05:42.384 
"framework_get_config", 00:05:42.384 "framework_get_subsystems", 00:05:42.384 "fsdev_set_opts", 00:05:42.384 "fsdev_get_opts", 00:05:42.384 "trace_get_info", 00:05:42.384 "trace_get_tpoint_group_mask", 00:05:42.384 "trace_disable_tpoint_group", 00:05:42.384 "trace_enable_tpoint_group", 00:05:42.384 "trace_clear_tpoint_mask", 00:05:42.384 "trace_set_tpoint_mask", 00:05:42.384 "notify_get_notifications", 00:05:42.384 "notify_get_types", 00:05:42.384 "spdk_get_version", 00:05:42.384 "rpc_get_methods" 00:05:42.384 ] 00:05:42.384 15:53:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.384 15:53:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.384 15:53:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58044 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58044 ']' 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58044 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58044 00:05:42.384 killing process with pid 58044 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58044' 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58044 00:05:42.384 15:53:40 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58044 00:05:43.754 ************************************ 00:05:43.754 END TEST spdkcli_tcp 00:05:43.754 ************************************ 00:05:43.754 00:05:43.754 real 0m2.531s 00:05:43.754 user 0m4.475s 00:05:43.754 sys 0m0.429s 00:05:43.754 15:53:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.754 15:53:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.754 15:53:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.754 15:53:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.754 15:53:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.754 15:53:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.754 ************************************ 00:05:43.754 START TEST dpdk_mem_utility 00:05:43.754 ************************************ 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.754 * Looking for test storage... 
00:05:43.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.754 15:53:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.754 --rc genhtml_branch_coverage=1 00:05:43.754 --rc genhtml_function_coverage=1 00:05:43.754 --rc genhtml_legend=1 00:05:43.754 --rc geninfo_all_blocks=1 00:05:43.754 --rc geninfo_unexecuted_blocks=1 00:05:43.754 00:05:43.754 ' 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.754 --rc 
genhtml_branch_coverage=1 00:05:43.754 --rc genhtml_function_coverage=1 00:05:43.754 --rc genhtml_legend=1 00:05:43.754 --rc geninfo_all_blocks=1 00:05:43.754 --rc geninfo_unexecuted_blocks=1 00:05:43.754 00:05:43.754 ' 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.754 --rc genhtml_branch_coverage=1 00:05:43.754 --rc genhtml_function_coverage=1 00:05:43.754 --rc genhtml_legend=1 00:05:43.754 --rc geninfo_all_blocks=1 00:05:43.754 --rc geninfo_unexecuted_blocks=1 00:05:43.754 00:05:43.754 ' 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.754 --rc genhtml_branch_coverage=1 00:05:43.754 --rc genhtml_function_coverage=1 00:05:43.754 --rc genhtml_legend=1 00:05:43.754 --rc geninfo_all_blocks=1 00:05:43.754 --rc geninfo_unexecuted_blocks=1 00:05:43.754 00:05:43.754 ' 00:05:43.754 15:53:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:43.754 15:53:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58144 00:05:43.754 15:53:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58144 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58144 ']' 00:05:43.754 15:53:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.754 15:53:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.012 [2024-11-20 15:53:42.059633] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:44.012 [2024-11-20 15:53:42.059886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58144 ] 00:05:44.012 [2024-11-20 15:53:42.218505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.270 [2024-11-20 15:53:42.339185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.834 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.834 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:44.834 15:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.834 15:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.834 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.834 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.834 { 00:05:44.834 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.834 } 00:05:44.834 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.834 15:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.834 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:44.834 1 heaps totaling size 824.000000 MiB 00:05:44.834 size: 824.000000 MiB heap id: 0 00:05:44.834 end heaps---------- 00:05:44.834 9 mempools totaling size 603.782043 MiB 00:05:44.834 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.834 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.835 size: 100.555481 MiB name: bdev_io_58144 00:05:44.835 size: 50.003479 MiB name: msgpool_58144 00:05:44.835 size: 36.509338 MiB name: fsdev_io_58144 00:05:44.835 size: 21.763794 MiB name: PDU_Pool 00:05:44.835 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.835 size: 4.133484 MiB name: evtpool_58144 00:05:44.835 size: 0.026123 MiB name: Session_Pool 00:05:44.835 end mempools------- 00:05:44.835 6 memzones totaling size 4.142822 MiB 00:05:44.835 size: 1.000366 MiB name: RG_ring_0_58144 00:05:44.835 size: 1.000366 MiB name: RG_ring_1_58144 00:05:44.835 size: 1.000366 MiB name: RG_ring_4_58144 00:05:44.835 size: 1.000366 MiB name: RG_ring_5_58144 00:05:44.835 size: 0.125366 MiB name: RG_ring_2_58144 00:05:44.835 size: 0.015991 MiB name: RG_ring_3_58144 00:05:44.835 end memzones------- 00:05:44.835 15:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.094 heap id: 0 total size: 824.000000 MiB number of busy elements: 314 number of free elements: 18 00:05:45.094 list of free elements. 
size: 16.781616 MiB 00:05:45.094 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:45.094 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:45.094 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:45.094 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:45.094 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:45.094 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:45.094 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:45.094 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:45.094 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:45.094 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:45.094 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:45.094 element at address: 0x20001b400000 with size: 0.562927 MiB 00:05:45.094 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:45.094 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:45.094 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:45.094 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:45.094 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:45.094 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:45.094 list of standard malloc elements. size: 199.287476 MiB 00:05:45.094 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:45.094 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:45.094 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:45.094 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:45.094 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:45.094 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:45.094 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:45.094 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:45.094 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:45.094 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:45.094 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:45.094 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:45.094 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:45.094 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:45.094 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:45.094 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:05:45.095 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b491ec0 with size: 0.000244 MiB 
00:05:45.095 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:45.095 element at 
address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:45.095 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:45.095 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886d980 
with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:45.096 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:45.096 list of memzone associated elements. 
size: 607.930908 MiB 00:05:45.096 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:45.096 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.096 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:45.096 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.096 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:45.096 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58144_0 00:05:45.096 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:45.096 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58144_0 00:05:45.096 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:45.096 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58144_0 00:05:45.096 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:45.096 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.096 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:45.096 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.096 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:45.096 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58144_0 00:05:45.096 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:45.096 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58144 00:05:45.096 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:45.096 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58144 00:05:45.096 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:45.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.096 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:45.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.096 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:45.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.096 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:45.096 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.096 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:45.096 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58144 00:05:45.096 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:45.096 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58144 00:05:45.096 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:45.096 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58144 00:05:45.096 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:45.096 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58144 00:05:45.096 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:45.096 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58144 00:05:45.096 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:45.096 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58144 00:05:45.096 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:45.096 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.096 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:45.096 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.096 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:45.096 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:45.096 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:45.096 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58144 00:05:45.096 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:45.096 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58144 00:05:45.096 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:45.096 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.096 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:45.096 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.096 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:45.096 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58144 00:05:45.096 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:45.096 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.096 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:45.096 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58144 00:05:45.096 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:45.096 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58144 00:05:45.096 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:45.096 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58144 00:05:45.096 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:45.096 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.096 15:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.096 15:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58144 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58144 ']' 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58144 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58144 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.096 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58144' 00:05:45.096 killing process with pid 58144 00:05:45.097 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58144 00:05:45.097 15:53:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58144 00:05:46.996 00:05:46.996 real 0m2.956s 00:05:46.996 user 0m2.911s 00:05:46.996 sys 0m0.459s 00:05:46.996 15:53:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.996 15:53:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.996 ************************************ 00:05:46.996 END TEST dpdk_mem_utility 00:05:46.996 ************************************ 00:05:46.996 15:53:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.996 15:53:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.996 15:53:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.996 15:53:44 -- common/autotest_common.sh@10 -- # set +x 
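Note on the teardown traced above: killprocess first checks that the pid argument is set and that the process is still alive, resolves the process name (so a sudo wrapper is not signalled directly), then kills and reaps the target. A minimal sketch reconstructed from the xtrace lines, assuming the usual shape of common/autotest_common.sh (the real helper has more branches):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                 # mirrors the '[' -z 58144 ']' guard
        kill -0 "$pid" || return 1                # bail out if the process is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        else
            process_name=unknown                  # assumption: non-Linux fallback
        fi
        if [[ $process_name == sudo ]]; then
            # assumption: when wrapped in sudo, signal sudo's child instead
            pid=$(ps --ppid "$pid" -o pid= | head -n 1)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap; tolerate signal exit codes
    }

Here the comm lookup returned reactor_0, so the plain kill/wait path ran and the dpdk_mem_utility app exited cleanly before the event suite below started.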
00:05:46.996 ************************************ 00:05:46.996 START TEST event 00:05:46.996 ************************************ 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.996 * Looking for test storage... 00:05:46.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.996 15:53:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.996 15:53:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.996 15:53:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.996 15:53:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.996 15:53:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.996 15:53:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.996 15:53:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.996 15:53:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.996 15:53:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.996 15:53:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.996 15:53:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.996 15:53:44 event -- scripts/common.sh@344 -- # case "$op" in 00:05:46.996 15:53:44 event -- scripts/common.sh@345 -- # : 1 00:05:46.996 15:53:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.996 15:53:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.996 15:53:44 event -- scripts/common.sh@365 -- # decimal 1 00:05:46.996 15:53:44 event -- scripts/common.sh@353 -- # local d=1 00:05:46.996 15:53:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.996 15:53:44 event -- scripts/common.sh@355 -- # echo 1 00:05:46.996 15:53:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.996 15:53:44 event -- scripts/common.sh@366 -- # decimal 2 00:05:46.996 15:53:44 event -- scripts/common.sh@353 -- # local d=2 00:05:46.996 15:53:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.996 15:53:44 event -- scripts/common.sh@355 -- # echo 2 00:05:46.996 15:53:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.996 15:53:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.996 15:53:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.996 15:53:44 event -- scripts/common.sh@368 -- # return 0 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.996 --rc genhtml_branch_coverage=1 00:05:46.996 --rc genhtml_function_coverage=1 00:05:46.996 --rc genhtml_legend=1 00:05:46.996 --rc geninfo_all_blocks=1 00:05:46.996 --rc geninfo_unexecuted_blocks=1 00:05:46.996 00:05:46.996 ' 00:05:46.996 15:53:44 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.996 --rc genhtml_branch_coverage=1 00:05:46.996 --rc genhtml_function_coverage=1 00:05:46.996 --rc genhtml_legend=1 00:05:46.996 --rc 
geninfo_all_blocks=1 00:05:46.996 --rc geninfo_unexecuted_blocks=1 00:05:46.996 00:05:46.996 ' 00:05:46.996 15:53:45 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.996 --rc genhtml_branch_coverage=1 00:05:46.996 --rc genhtml_function_coverage=1 00:05:46.996 --rc genhtml_legend=1 00:05:46.996 --rc geninfo_all_blocks=1 00:05:46.996 --rc geninfo_unexecuted_blocks=1 00:05:46.996 00:05:46.996 ' 00:05:46.996 15:53:45 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.996 --rc genhtml_branch_coverage=1 00:05:46.996 --rc genhtml_function_coverage=1 00:05:46.996 --rc genhtml_legend=1 00:05:46.996 --rc geninfo_all_blocks=1 00:05:46.996 --rc geninfo_unexecuted_blocks=1 00:05:46.996 00:05:46.996 ' 00:05:46.996 15:53:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:46.996 15:53:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.996 15:53:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.996 15:53:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:46.997 15:53:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.997 15:53:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.997 ************************************ 00:05:46.997 START TEST event_perf 00:05:46.997 ************************************ 00:05:46.997 15:53:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.997 Running I/O for 1 seconds...[2024-11-20 15:53:45.041484] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:46.997 [2024-11-20 15:53:45.041608] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ] 00:05:46.997 [2024-11-20 15:53:45.202924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.254 [2024-11-20 15:53:45.329540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.254 [2024-11-20 15:53:45.329846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.254 [2024-11-20 15:53:45.330076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.254 Running I/O for 1 seconds...[2024-11-20 15:53:45.330102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.652 00:05:48.652 lcore 0: 134011 00:05:48.652 lcore 1: 134008 00:05:48.652 lcore 2: 134010 00:05:48.652 lcore 3: 134013 00:05:48.652 done. 
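The lt 1.15 2 / cmp_versions trace at the top of this test section is the harness probing the installed lcov version to pick coverage flags: version strings are split on '.', '-' and ':' and compared component by component. Condensed into a sketch reconstructed from the scripts/common.sh xtrace (details such as equality handling are simplified; equal versions are treated as not-less-than):

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # assumption: non-numeric parts compare as 0
    }

    lt() {   # lt 1.15 2  -> returns 0 (true) when $1 < $2
        local -a ver1 ver2
        local v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1
    }

Since the installed lcov (1.15) is older than 2, the pre-2.0 branch/function-coverage flags seen in LCOV_OPTS above get exported for the remainder of TEST event.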
00:05:48.652 00:05:48.652 real 0m1.500s 00:05:48.652 user 0m4.280s 00:05:48.652 sys 0m0.093s 00:05:48.652 15:53:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.652 15:53:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.652 ************************************ 00:05:48.652 END TEST event_perf 00:05:48.652 ************************************ 00:05:48.652 15:53:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:48.652 15:53:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:48.652 15:53:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.652 15:53:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.652 ************************************ 00:05:48.652 START TEST event_reactor 00:05:48.652 ************************************ 00:05:48.652 15:53:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:48.652 [2024-11-20 15:53:46.587951] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:48.652 [2024-11-20 15:53:46.588124] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:05:48.652 [2024-11-20 15:53:46.762104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.652 [2024-11-20 15:53:46.881746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.027 test_start 00:05:50.027 oneshot 00:05:50.027 tick 100 00:05:50.027 tick 100 00:05:50.027 tick 250 00:05:50.027 tick 100 00:05:50.027 tick 100 00:05:50.027 tick 100 00:05:50.027 tick 250 00:05:50.027 tick 500 00:05:50.027 tick 100 00:05:50.027 tick 100 00:05:50.027 tick 250 00:05:50.027 tick 100 00:05:50.027 tick 100 00:05:50.027 test_end 00:05:50.027 00:05:50.027 real 0m1.503s 00:05:50.027 user 0m1.312s 00:05:50.027 sys 0m0.080s 00:05:50.027 15:53:48 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.027 ************************************ 00:05:50.027 15:53:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.027 END TEST event_reactor 00:05:50.027 ************************************ 00:05:50.027 15:53:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.027 15:53:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.027 15:53:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.027 15:53:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.027 ************************************ 00:05:50.027 START TEST event_reactor_perf 00:05:50.027 ************************************ 00:05:50.027 15:53:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.027 [2024-11-20 15:53:48.131456] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:50.027 [2024-11-20 15:53:48.131612] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58317 ] 00:05:50.284 [2024-11-20 15:53:48.292193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.284 [2024-11-20 15:53:48.395788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.655 test_start 00:05:51.655 test_end 00:05:51.655 Performance: 386542 events per second 00:05:51.655 ************************************ 00:05:51.655 END TEST event_reactor_perf 00:05:51.655 ************************************ 00:05:51.655 00:05:51.655 real 0m1.435s 00:05:51.655 user 0m1.251s 00:05:51.655 sys 0m0.076s 00:05:51.655 15:53:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.655 15:53:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.655 15:53:49 event -- event/event.sh@49 -- # uname -s 00:05:51.655 15:53:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.655 15:53:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.655 15:53:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.655 15:53:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.655 15:53:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.655 ************************************ 00:05:51.655 START TEST event_scheduler 00:05:51.655 ************************************ 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.655 * Looking for test storage... 
00:05:51.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:51.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.655 15:53:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.655 --rc genhtml_branch_coverage=1 00:05:51.655 --rc genhtml_function_coverage=1 00:05:51.655 --rc genhtml_legend=1 00:05:51.655 --rc geninfo_all_blocks=1 00:05:51.655 --rc geninfo_unexecuted_blocks=1 00:05:51.655 00:05:51.655 ' 00:05:51.655 15:53:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.656 --rc genhtml_branch_coverage=1 00:05:51.656 --rc genhtml_function_coverage=1 00:05:51.656 --rc genhtml_legend=1 00:05:51.656 --rc geninfo_all_blocks=1 00:05:51.656 --rc geninfo_unexecuted_blocks=1 00:05:51.656 00:05:51.656 ' 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.656 --rc genhtml_branch_coverage=1 00:05:51.656 --rc genhtml_function_coverage=1 00:05:51.656 --rc genhtml_legend=1 00:05:51.656 --rc geninfo_all_blocks=1 00:05:51.656 --rc geninfo_unexecuted_blocks=1 00:05:51.656 00:05:51.656 ' 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.656 --rc genhtml_branch_coverage=1 00:05:51.656 --rc genhtml_function_coverage=1 00:05:51.656 --rc genhtml_legend=1 00:05:51.656 --rc geninfo_all_blocks=1 00:05:51.656 --rc geninfo_unexecuted_blocks=1 00:05:51.656 00:05:51.656 ' 00:05:51.656 15:53:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:51.656 15:53:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58393 00:05:51.656 15:53:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.656 15:53:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58393 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58393 ']' 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.656 15:53:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.656 15:53:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:51.656 [2024-11-20 15:53:49.779984] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:51.656 [2024-11-20 15:53:49.780745] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58393 ] 00:05:51.912 [2024-11-20 15:53:49.943203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.912 [2024-11-20 15:53:50.079132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.912 [2024-11-20 15:53:50.079571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.912 [2024-11-20 15:53:50.079662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.912 [2024-11-20 15:53:50.079671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:52.477 15:53:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.477 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.477 POWER: Cannot set governor of lcore 0 to performance 00:05:52.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.477 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.477 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.477 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:52.477 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:52.477 POWER: Unable to set Power Management Environment for lcore 0 00:05:52.477 [2024-11-20 15:53:50.641367] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:52.477 [2024-11-20 15:53:50.641392] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:52.477 [2024-11-20 15:53:50.641403] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:52.477 [2024-11-20 15:53:50.641420] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.477 [2024-11-20 15:53:50.641428] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.477 [2024-11-20 15:53:50.641438] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.477 15:53:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.477 15:53:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.735 [2024-11-20 15:53:50.890316] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
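The scheduler app startup just traced follows the standard paused-start pattern: launch with --wait-for-rpc, wait for the RPC socket, switch the framework to the dynamic scheduler, then let initialization finish. The POWER/governor errors above are expected inside a VM, where cpufreq sysfs files and the virtio power channel are unavailable, so the dpdk governor fails to initialize and the dynamic scheduler proceeds with its default limits (load 20, core 80, busy 95). Roughly, as a sketch (waitforlisten and rpc_cmd are harness helpers; the binary path mirrors the trace):

    scheduler=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
    "$scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$scheduler_pid"              # polls until /var/tmp/spdk.sock is up
    rpc_cmd framework_set_scheduler dynamic     # must happen before framework_start_init
    rpc_cmd framework_start_init                # reactors then start on cores 0-3

With -p 0x2 the main lcore is core 1's slot in the 0xF mask (--main-lcore=2 in the EAL parameters above), which is why the reactors report starting in the order seen in the log.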
00:05:52.735 15:53:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.735 15:53:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.735 15:53:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.735 15:53:50 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 ************************************ 00:05:52.736 START TEST scheduler_create_thread 00:05:52.736 ************************************ 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 2 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 3 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 4 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 5 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 6 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 7 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 8 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 9 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.736 10 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.736 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.994 15:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.994 15:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.994 15:53:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.994 15:53:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.994 15:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.994 15:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.251 ************************************ 00:05:53.251 END TEST scheduler_create_thread 00:05:53.251 ************************************ 00:05:53.251 15:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.251 00:05:53.251 real 0m0.593s 00:05:53.251 user 0m0.010s 00:05:53.251 sys 0m0.007s 00:05:53.251 15:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.251 15:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.508 15:53:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.508 15:53:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58393 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58393 ']' 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58393 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58393 00:05:53.508 killing process with pid 58393 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58393' 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58393 00:05:53.508 15:53:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58393 00:05:53.766 [2024-11-20 15:53:51.975664] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
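Before that shutdown, the scheduler_create_thread subtest exercised the scheduler plugin RPCs traced above: four threads pinned active at 100%, four pinned idle at 0%, one unpinned at 30%, one created idle and then bumped to 50% active, and one created only to be deleted. In sequence, as a condensed sketch (-n names the thread, -m pins it to a core mask, and -a appears to set its active percentage):

    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # was id 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)     # was id 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"

Each rpc_cmd returns the new thread id on stdout, which is how the trace arrived at thread_id=11 and thread_id=12 before the set_active and delete calls.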
00:05:54.699 00:05:54.699 real 0m3.187s 00:05:54.699 user 0m6.059s 00:05:54.699 sys 0m0.393s 00:05:54.699 15:53:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.699 ************************************ 00:05:54.699 END TEST event_scheduler 00:05:54.699 ************************************ 00:05:54.699 15:53:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.699 15:53:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.699 15:53:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.699 15:53:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.699 15:53:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.699 15:53:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.699 ************************************ 00:05:54.699 START TEST app_repeat 00:05:54.699 ************************************ 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.699 Process app_repeat pid: 58477 00:05:54.699 spdk_app_start Round 0 00:05:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58477 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58477' 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58477 /var/tmp/spdk-nbd.sock 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58477 ']' 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.699 15:53:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.699 15:53:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.699 [2024-11-20 15:53:52.860576] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:54.699 [2024-11-20 15:53:52.860690] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58477 ] 00:05:54.956 [2024-11-20 15:53:53.015512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.956 [2024-11-20 15:53:53.133931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.956 [2024-11-20 15:53:53.134046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.521 15:53:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.521 15:53:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.521 15:53:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.779 Malloc0 00:05:55.779 15:53:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.038 Malloc1 00:05:56.038 15:53:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.038 15:53:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.297 /dev/nbd0 00:05:56.297 15:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.297 15:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.297 15:53:54 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.297 1+0 records in 00:05:56.297 1+0 records out 00:05:56.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199155 s, 20.6 MB/s 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.297 15:53:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.297 15:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.297 15:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.297 15:53:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.555 /dev/nbd1 00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.555 1+0 records in 00:05:56.555 1+0 records out 00:05:56.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310934 s, 13.2 MB/s 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.555 15:53:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
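Both NBD devices come up the same way: poll /proc/partitions until the device node appears, then read a single 4 KiB block back with O_DIRECT to prove it is usable. A sketch of that waitfornbd probe, assuming a short sleep between retries (the trace matched on the first attempt, so no back-off is visible) and with $testdir standing in for /home/vagrant/spdk_repo/spdk/test/event:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed retry delay, not visible in the trace
        done
        # read one block back; a zero-byte copy would mean the device is dead
        dd if=/dev/"$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$testdir/nbdtest")
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ]
    }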
00:05:56.555 15:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.814 15:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.814 { 00:05:56.814 "nbd_device": "/dev/nbd0", 00:05:56.814 "bdev_name": "Malloc0" 00:05:56.814 }, 00:05:56.814 { 00:05:56.814 "nbd_device": "/dev/nbd1", 00:05:56.814 "bdev_name": "Malloc1" 00:05:56.814 } 00:05:56.814 ]' 00:05:56.814 15:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.814 15:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.814 { 00:05:56.814 "nbd_device": "/dev/nbd0", 00:05:56.814 "bdev_name": "Malloc0" 00:05:56.814 }, 00:05:56.814 { 00:05:56.814 "nbd_device": "/dev/nbd1", 00:05:56.814 "bdev_name": "Malloc1" 00:05:56.814 } 00:05:56.814 ]' 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.814 /dev/nbd1' 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.814 /dev/nbd1' 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.814 256+0 records in 00:05:56.814 256+0 records out 00:05:56.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0096432 s, 109 MB/s 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.814 15:53:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.073 256+0 records in 00:05:57.073 256+0 records out 00:05:57.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244226 s, 42.9 MB/s 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.073 256+0 records in 00:05:57.073 256+0 records out 00:05:57.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211136 s, 49.7 MB/s 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.073 15:53:55 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.073 15:53:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.333 15:53:55 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.333 15:53:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.594 15:53:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.594 15:53:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.907 15:53:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.876 [2024-11-20 15:53:56.904487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.876 [2024-11-20 15:53:57.013793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.876 [2024-11-20 15:53:57.013805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.134 [2024-11-20 15:53:57.151532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.134 [2024-11-20 15:53:57.151795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.037 15:53:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.037 spdk_app_start Round 1 00:06:01.037 15:53:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:01.037 15:53:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58477 /var/tmp/spdk-nbd.sock 00:06:01.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.037 15:53:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58477 ']' 00:06:01.037 15:53:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.037 15:53:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.037 15:53:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
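After stopping both disks the harness re-counts the NBD devices the target still exports and requires 0 before killing the app. A sketch of that nbd_get_count path, with $rootdir assumed as above; the trailing || true mirrors the bare `true` in the trace, since grep -c exits nonzero when it counts zero matches:

    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name
        disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        echo "$disks_name" | grep -c /dev/nbd || true
    }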
00:06:01.037 15:53:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.037 15:53:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.294 15:53:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.294 15:53:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.294 15:53:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.294 Malloc0 00:06:01.551 15:53:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.551 Malloc1 00:06:01.551 15:53:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.551 15:53:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.809 /dev/nbd0 00:06:01.809 15:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.809 15:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.809 1+0 records in 00:06:01.809 1+0 records out 
00:06:01.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402968 s, 10.2 MB/s 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.809 15:54:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.809 15:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.809 15:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.809 15:54:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.067 /dev/nbd1 00:06:02.067 15:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.067 15:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.067 15:54:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.067 15:54:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.067 15:54:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.067 15:54:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.067 15:54:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.324 1+0 records in 00:06:02.324 1+0 records out 00:06:02.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323485 s, 12.7 MB/s 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.324 15:54:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.324 { 00:06:02.324 "nbd_device": "/dev/nbd0", 00:06:02.324 "bdev_name": "Malloc0" 00:06:02.324 }, 00:06:02.324 { 00:06:02.324 "nbd_device": "/dev/nbd1", 00:06:02.324 "bdev_name": "Malloc1" 00:06:02.324 } 
00:06:02.324 ]' 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.324 { 00:06:02.324 "nbd_device": "/dev/nbd0", 00:06:02.324 "bdev_name": "Malloc0" 00:06:02.324 }, 00:06:02.324 { 00:06:02.324 "nbd_device": "/dev/nbd1", 00:06:02.324 "bdev_name": "Malloc1" 00:06:02.324 } 00:06:02.324 ]' 00:06:02.324 15:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.581 /dev/nbd1' 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.581 /dev/nbd1' 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.581 15:54:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.582 256+0 records in 00:06:02.582 256+0 records out 00:06:02.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649901 s, 161 MB/s 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.582 256+0 records in 00:06:02.582 256+0 records out 00:06:02.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165559 s, 63.3 MB/s 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.582 256+0 records in 00:06:02.582 256+0 records out 00:06:02.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153629 s, 68.3 MB/s 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.582 15:54:00 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.582 15:54:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.840 15:54:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.840 15:54:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.098 15:54:01 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.098 15:54:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.098 15:54:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.663 15:54:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.228 [2024-11-20 15:54:02.227136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.228 [2024-11-20 15:54:02.312352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.228 [2024-11-20 15:54:02.312390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.228 [2024-11-20 15:54:02.419417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.228 [2024-11-20 15:54:02.419669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.754 spdk_app_start Round 2 00:06:06.754 15:54:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.754 15:54:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.754 15:54:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58477 /var/tmp/spdk-nbd.sock 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58477 ']' 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
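Every round runs the same 1 MiB write/verify pass: fill a scratch file from /dev/urandom, dd it onto each NBD device with O_DIRECT, then compare the first 1M of each device against the file. A condensed sketch of that pass, with $testdir and nbd_list taken from the trace:

    tmp_file=$testdir/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbd_list[@]}"; do
        # byte-for-byte compare of the first 1 MiB
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"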
00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.755 15:54:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.755 15:54:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.012 Malloc0 00:06:07.012 15:54:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.271 Malloc1 00:06:07.271 15:54:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.271 15:54:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.567 /dev/nbd0 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.567 1+0 records in 00:06:07.567 1+0 records out 
00:06:07.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267132 s, 15.3 MB/s 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.567 /dev/nbd1 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.567 15:54:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.567 1+0 records in 00:06:07.567 1+0 records out 00:06:07.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163926 s, 25.0 MB/s 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.567 15:54:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.568 15:54:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.568 15:54:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.568 15:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.568 15:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.568 15:54:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.568 15:54:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.568 15:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.840 15:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.840 { 00:06:07.840 "nbd_device": "/dev/nbd0", 00:06:07.840 "bdev_name": "Malloc0" 00:06:07.840 }, 00:06:07.840 { 00:06:07.840 "nbd_device": "/dev/nbd1", 00:06:07.840 "bdev_name": "Malloc1" 00:06:07.840 } 
00:06:07.840 ]' 00:06:07.840 15:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.840 { 00:06:07.840 "nbd_device": "/dev/nbd0", 00:06:07.840 "bdev_name": "Malloc0" 00:06:07.840 }, 00:06:07.840 { 00:06:07.840 "nbd_device": "/dev/nbd1", 00:06:07.840 "bdev_name": "Malloc1" 00:06:07.840 } 00:06:07.840 ]' 00:06:07.840 15:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.840 /dev/nbd1' 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.840 /dev/nbd1' 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.840 15:54:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.840 256+0 records in 00:06:07.840 256+0 records out 00:06:07.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101311 s, 104 MB/s 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.841 256+0 records in 00:06:07.841 256+0 records out 00:06:07.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137183 s, 76.4 MB/s 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.841 256+0 records in 00:06:07.841 256+0 records out 00:06:07.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149995 s, 69.9 MB/s 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.841 15:54:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.099 15:54:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.357 15:54:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.614 15:54:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.614 15:54:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.871 15:54:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.436 [2024-11-20 15:54:07.607558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.436 [2024-11-20 15:54:07.683370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.436 [2024-11-20 15:54:07.683502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.693 [2024-11-20 15:54:07.787216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.693 [2024-11-20 15:54:07.787266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.230 15:54:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58477 /var/tmp/spdk-nbd.sock 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58477 ']' 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
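With all three rounds done, the harness waits on the listener one last time and then tears the app down with killprocess, whose steps the trace spells out here and earlier for pid 58393. A sketch of that helper; the sudo comparison is kept as a comment because the trace only shows the test, not the escalated branch:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                      # fail fast if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        # the trace compares $process_name to sudo (autotest_common.sh@964);
        # presumably that case escalates, but only the plain path is shown
        kill "$pid"
        wait "$pid"
    }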
00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.230 15:54:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58477 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58477 ']' 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58477 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58477 00:06:12.230 killing process with pid 58477 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58477' 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58477 00:06:12.230 15:54:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58477 00:06:12.795 spdk_app_start is called in Round 0. 00:06:12.795 Shutdown signal received, stop current app iteration 00:06:12.795 Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 reinitialization... 00:06:12.795 spdk_app_start is called in Round 1. 00:06:12.795 Shutdown signal received, stop current app iteration 00:06:12.795 Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 reinitialization... 00:06:12.795 spdk_app_start is called in Round 2. 00:06:12.795 Shutdown signal received, stop current app iteration 00:06:12.795 Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 reinitialization... 00:06:12.795 spdk_app_start is called in Round 3. 00:06:12.795 Shutdown signal received, stop current app iteration 00:06:12.795 15:54:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.795 15:54:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.795 00:06:12.795 real 0m18.002s 00:06:12.795 user 0m39.391s 00:06:12.795 sys 0m2.192s 00:06:12.795 15:54:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.795 ************************************ 00:06:12.795 END TEST app_repeat 00:06:12.795 ************************************ 00:06:12.795 15:54:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.795 15:54:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.795 15:54:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:12.795 15:54:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.795 15:54:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.795 15:54:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.795 ************************************ 00:06:12.795 START TEST cpu_locks 00:06:12.795 ************************************ 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:12.795 * Looking for test storage... 
00:06:12.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.795 15:54:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.795 --rc genhtml_branch_coverage=1 00:06:12.795 --rc genhtml_function_coverage=1 00:06:12.795 --rc genhtml_legend=1 00:06:12.795 --rc geninfo_all_blocks=1 00:06:12.795 --rc geninfo_unexecuted_blocks=1 00:06:12.795 00:06:12.795 ' 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.795 --rc genhtml_branch_coverage=1 00:06:12.795 --rc genhtml_function_coverage=1 
00:06:12.795 --rc genhtml_legend=1 00:06:12.795 --rc geninfo_all_blocks=1 00:06:12.795 --rc geninfo_unexecuted_blocks=1 00:06:12.795 00:06:12.795 ' 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.795 --rc genhtml_branch_coverage=1 00:06:12.795 --rc genhtml_function_coverage=1 00:06:12.795 --rc genhtml_legend=1 00:06:12.795 --rc geninfo_all_blocks=1 00:06:12.795 --rc geninfo_unexecuted_blocks=1 00:06:12.795 00:06:12.795 ' 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.795 --rc genhtml_branch_coverage=1 00:06:12.795 --rc genhtml_function_coverage=1 00:06:12.795 --rc genhtml_legend=1 00:06:12.795 --rc geninfo_all_blocks=1 00:06:12.795 --rc geninfo_unexecuted_blocks=1 00:06:12.795 00:06:12.795 ' 00:06:12.795 15:54:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.795 15:54:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.795 15:54:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.795 15:54:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.795 15:54:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.795 ************************************ 00:06:12.795 START TEST default_locks 00:06:12.795 ************************************ 00:06:12.795 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:12.795 15:54:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58913 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58913 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58913 ']' 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.796 15:54:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.053 [2024-11-20 15:54:11.062098] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
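The xtrace above walks scripts/common.sh's dotted-version comparison ("lt 1.15 2" via cmp_versions) that gates which lcov coverage flags get exported. A minimal bash sketch of the same split-and-compare idea, assuming missing fields count as zero; this mirrors the traced logic rather than quoting the SPDK source verbatim:

    # lt A B: exit 0 when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        local IFS='.-:' v                      # same separators the trace reads with
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A is older
        done
        return 1                                              # equal: not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2: use the pre-2.0 --rc option spelling"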
00:06:13.053 [2024-11-20 15:54:11.062757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58913 ] 00:06:13.053 [2024-11-20 15:54:11.216680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.053 [2024-11-20 15:54:11.298906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.618 15:54:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.618 15:54:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:13.618 15:54:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58913 00:06:13.618 15:54:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58913 00:06:13.618 15:54:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58913 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58913 ']' 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58913 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58913 00:06:13.875 killing process with pid 58913 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.875 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58913' 00:06:13.876 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58913 00:06:13.876 15:54:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58913 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58913 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58913 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58913 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58913 ']' 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.248 15:54:13 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.248 ERROR: process (pid: 58913) is no longer running 00:06:15.248 ************************************ 00:06:15.248 END TEST default_locks 00:06:15.248 ************************************ 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58913) - No such process 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.248 00:06:15.248 real 0m2.279s 00:06:15.248 user 0m2.247s 00:06:15.248 sys 0m0.431s 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.248 15:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 15:54:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.248 15:54:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.248 15:54:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.248 15:54:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 ************************************ 00:06:15.248 START TEST default_locks_via_rpc 00:06:15.248 ************************************ 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58966 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58966 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
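After killprocess, the script asserts the daemon is really gone with "NOT waitforlisten 58913": the wrapper inverts the exit status, and the es checks in the trace (es=1, es > 128) screen out signal deaths. A rough sketch of that inversion under the assumption that the helper's only job is to flip "normal" failures; the real autotest_common.sh also does the valid_exec_arg argument validation seen above:

    # NOT cmd...: succeed only when cmd fails with a plain error (1..128).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate as-is
        (( es == 0 )) && return 1        # unexpectedly succeeded
        return 0                         # failed as expected
    }

    NOT kill -0 58913   # passes here, since pid 58913 was already reaped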
00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.248 15:54:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 [2024-11-20 15:54:13.384620] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:15.248 [2024-11-20 15:54:13.384908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:06:15.506 [2024-11-20 15:54:13.540513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.506 [2024-11-20 15:54:13.622697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58966 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58966 00:06:16.070 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58966 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58966 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:06:16.328 killing process with pid 58966 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58966 00:06:16.328 15:54:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58966 00:06:17.700 ************************************ 00:06:17.700 END TEST default_locks_via_rpc 00:06:17.700 ************************************ 00:06:17.700 00:06:17.700 real 0m2.360s 00:06:17.700 user 0m2.391s 00:06:17.700 sys 0m0.422s 00:06:17.700 15:54:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.700 15:54:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.700 15:54:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.700 15:54:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.700 15:54:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.700 15:54:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.700 ************************************ 00:06:17.700 START TEST non_locking_app_on_locked_coremask 00:06:17.700 ************************************ 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59018 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59018 /var/tmp/spdk.sock 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59018 ']' 00:06:17.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.700 15:54:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.700 [2024-11-20 15:54:15.775860] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
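Both tests above decide lock state through the locks_exist helper, which the trace expands to lslocks piped into grep. A sketch under the assumption that util-linux lslocks is available (it is on these CI VMs, per the trace):

    # locks_exist <pid>: true when <pid> holds an SPDK per-core lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # exactly the traced pipeline
    }

    locks_exist 59018 && echo "pid 59018 holds core locks"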
00:06:17.700 [2024-11-20 15:54:15.775955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59018 ] 00:06:17.700 [2024-11-20 15:54:15.925903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.957 [2024-11-20 15:54:16.007790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59034 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59034 /var/tmp/spdk2.sock 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59034 ']' 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.524 15:54:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.524 [2024-11-20 15:54:16.697546] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:18.524 [2024-11-20 15:54:16.697841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59034 ] 00:06:18.782 [2024-11-20 15:54:16.861745] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
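The reason this startup prints "CPU core locks deactivated." instead of a claim error: the second target overlaps core 0 with pid 59018 but opts out of lock claiming. Both command lines come straight from the trace:

    # First instance claims core 0 (mask 0x1) and holds its lock file:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1

    # Second instance shares core 0 but skips the claim, so both can run:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock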
00:06:18.782 [2024-11-20 15:54:16.861802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.782 [2024-11-20 15:54:17.022201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.713 15:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.713 15:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.713 15:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59018 00:06:19.969 15:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.969 15:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59018 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59018 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59018 ']' 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59018 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59018 00:06:20.226 killing process with pid 59018 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59018' 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59018 00:06:20.226 15:54:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59018 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59034 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59034 ']' 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59034 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59034 00:06:22.759 killing process with pid 59034 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59034' 00:06:22.759 15:54:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59034 00:06:22.759 15:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59034 00:06:23.699 00:06:23.699 real 0m6.180s 00:06:23.699 user 0m6.505s 00:06:23.699 sys 0m0.761s 00:06:23.699 15:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.699 ************************************ 00:06:23.699 15:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.699 END TEST non_locking_app_on_locked_coremask 00:06:23.699 ************************************ 00:06:23.699 15:54:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.699 15:54:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.699 15:54:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.699 15:54:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.699 ************************************ 00:06:23.699 START TEST locking_app_on_unlocked_coremask 00:06:23.699 ************************************ 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59125 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59125 /var/tmp/spdk.sock 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59125 ']' 00:06:23.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.699 15:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.956 [2024-11-20 15:54:22.010328] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:23.956 [2024-11-20 15:54:22.010439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59125 ] 00:06:23.956 [2024-11-20 15:54:22.166945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
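Teardown in each of these tests follows the killprocess sequence visible in the trace: probe the pid with kill -0, read the command name (reactor_0 here), then kill and wait. A condensed sketch; the wait at the end matches the traced "wait <pid>" and assumes the target is a child of the same shell:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                        # already gone: nothing to do
        process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in the trace
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the core lock files are released for the next test
    }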
00:06:23.956 [2024-11-20 15:54:22.166982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.213 [2024-11-20 15:54:22.247578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59141 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59141 /var/tmp/spdk2.sock 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59141 ']' 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.777 15:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.777 [2024-11-20 15:54:22.901946] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:24.777 [2024-11-20 15:54:22.902075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59141 ] 00:06:25.033 [2024-11-20 15:54:23.065026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.033 [2024-11-20 15:54:23.232082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.965 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.965 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.965 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59141 00:06:25.965 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59141 00:06:25.965 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59125 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59125 ']' 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59125 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59125 00:06:26.530 killing process with pid 59125 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59125' 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59125 00:06:26.530 15:54:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59125 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59141 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59141 ']' 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59141 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59141 00:06:29.052 killing process with pid 59141 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.052 15:54:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59141' 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59141 00:06:29.052 15:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59141 00:06:30.420 ************************************ 00:06:30.420 END TEST locking_app_on_unlocked_coremask 00:06:30.420 ************************************ 00:06:30.420 00:06:30.420 real 0m6.322s 00:06:30.420 user 0m6.577s 00:06:30.420 sys 0m0.846s 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.420 15:54:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:30.420 15:54:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.420 15:54:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.420 15:54:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.420 ************************************ 00:06:30.420 START TEST locking_app_on_locked_coremask 00:06:30.420 ************************************ 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59238 00:06:30.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59238 /var/tmp/spdk.sock 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.420 15:54:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.420 [2024-11-20 15:54:28.363970] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
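The test starting here runs the conflict in the other direction: a second spdk_tgt on the already-claimed core 0 without --disable-cpumask-locks, which must die with the app.c claim error shown below. A rough bash model of what the per-core claim amounts to, assuming one exclusive flock per /var/tmp/spdk_cpu_lock_<core> file; the real claim is C code in app.c, not shell:

    # claim_core <n>: grab the exclusive per-core lock or fail like app.c does.
    claim_core() {
        local core=$1 fd
        exec {fd}> "$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")"
        if ! flock -n "$fd"; then   # non-blocking exclusive lock on the open fd
            echo "Cannot create lock on core $core, probably another process has claimed it" >&2
            return 1
        fi
    }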
00:06:30.420 [2024-11-20 15:54:28.364065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:06:30.420 [2024-11-20 15:54:28.513093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.420 [2024-11-20 15:54:28.597235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59248 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59248 /var/tmp/spdk2.sock 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59248 /var/tmp/spdk2.sock 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59248 /var/tmp/spdk2.sock 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59248 ']' 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.984 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.985 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.241 [2024-11-20 15:54:29.310768] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:31.241 [2024-11-20 15:54:29.311151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:06:31.558 [2024-11-20 15:54:29.492987] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59238 has claimed it. 00:06:31.558 [2024-11-20 15:54:29.493051] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.818 ERROR: process (pid: 59248) is no longer running 00:06:31.818 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59248) - No such process 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59238 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59238 00:06:31.818 15:54:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59238 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59238 ']' 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59238 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59238 00:06:32.077 killing process with pid 59238 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59238' 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59238 00:06:32.077 15:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59238 00:06:33.454 ************************************ 00:06:33.454 END TEST locking_app_on_locked_coremask 00:06:33.454 ************************************ 00:06:33.454 00:06:33.454 real 0m3.154s 00:06:33.454 user 0m3.464s 00:06:33.454 sys 0m0.559s 00:06:33.454 15:54:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.454 15:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.454 15:54:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:33.454 15:54:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.454 15:54:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.454 15:54:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.454 ************************************ 00:06:33.454 START TEST locking_overlapped_coremask 00:06:33.454 ************************************ 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59307 00:06:33.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59307 /var/tmp/spdk.sock 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59307 ']' 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.454 15:54:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.454 [2024-11-20 15:54:31.599333] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:33.454 [2024-11-20 15:54:31.599445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:06:33.716 [2024-11-20 15:54:31.761307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.716 [2024-11-20 15:54:31.867032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.716 [2024-11-20 15:54:31.867587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.716 [2024-11-20 15:54:31.867777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59325 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59325 /var/tmp/spdk2.sock 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59325 /var/tmp/spdk2.sock 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59325 /var/tmp/spdk2.sock 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']' 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.287 15:54:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.545 [2024-11-20 15:54:32.543069] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
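Why the failure that follows lands on core 2 specifically: pid 59307 holds mask 0x7 (cores 0, 1, 2) and the second target asks for 0x1c (cores 2, 3, 4), so the claims collide only on the shared bit. The overlap is plain mask arithmetic:

    # 0b00111 & 0b11100 = 0b00100: only core 2 is contested.
    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> contested mask: 0x4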
00:06:34.545 [2024-11-20 15:54:32.543337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:06:34.545 [2024-11-20 15:54:32.717596] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59307 has claimed it. 00:06:34.545 [2024-11-20 15:54:32.717653] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.114 ERROR: process (pid: 59325) is no longer running 00:06:35.114 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59325) - No such process 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59307 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59307 ']' 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59307 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59307 00:06:35.114 killing process with pid 59307 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59307' 00:06:35.114 15:54:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59307 00:06:35.114 15:54:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59307 00:06:36.523 ************************************ 00:06:36.523 END TEST locking_overlapped_coremask 00:06:36.523 ************************************ 00:06:36.523 00:06:36.523 real 0m3.232s 00:06:36.523 user 0m8.771s 00:06:36.523 sys 0m0.452s 00:06:36.523 15:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.523 15:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.784 15:54:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.784 15:54:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.784 15:54:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.784 15:54:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.784 ************************************ 00:06:36.784 START TEST locking_overlapped_coremask_via_rpc 00:06:36.784 ************************************ 00:06:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59383 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59383 /var/tmp/spdk.sock 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59383 ']' 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.784 15:54:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.784 [2024-11-20 15:54:34.896035] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:36.784 [2024-11-20 15:54:34.896161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59383 ] 00:06:37.045 [2024-11-20 15:54:35.053363] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
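The check_remaining_locks expansion in the trace above is worth reading closely: after the losing target exits, exactly the three lock files for cores 0, 1 and 2 must remain. The traced globs and comparison, reassembled as a standalone function:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]               # same names, same order
    }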
00:06:37.045 [2024-11-20 15:54:35.053576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.046 [2024-11-20 15:54:35.159140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.046 [2024-11-20 15:54:35.159619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.046 [2024-11-20 15:54:35.159771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59400 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59400 /var/tmp/spdk2.sock 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59400 ']' 00:06:37.615 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.616 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.616 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.616 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.616 15:54:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.616 [2024-11-20 15:54:35.811453] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:37.616 [2024-11-20 15:54:35.811681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59400 ] 00:06:37.876 [2024-11-20 15:54:35.980412] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.876 [2024-11-20 15:54:35.980471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.137 [2024-11-20 15:54:36.186911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.137 [2024-11-20 15:54:36.189878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.137 [2024-11-20 15:54:36.189903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.521 [2024-11-20 15:54:37.363909] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59383 has claimed it. 00:06:39.521 request: 00:06:39.521 { 00:06:39.521 "method": "framework_enable_cpumask_locks", 00:06:39.521 "req_id": 1 00:06:39.521 } 00:06:39.521 Got JSON-RPC error response 00:06:39.521 response: 00:06:39.521 { 00:06:39.521 "code": -32603, 00:06:39.521 "message": "Failed to claim CPU core: 2" 00:06:39.521 } 00:06:39.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
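This failure is the point of the test. The first target accepted framework_enable_cpumask_locks and claimed lock files for its three cores (verified later as /var/tmp/spdk_cpu_lock_000 through _002), so the same RPC against the second target must fail: core 2 is already held by pid 59383, and the -32603 response above is exactly what the harness expects. A condensed repro sketch, assuming a built tree with spdk_tgt and rpc.py at the usual repo paths (values mirror the log; startup settling is omitted):

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &            # cores 0-2, locks off
  scripts/rpc.py framework_enable_cpumask_locks                  # claims /var/tmp/spdk_cpu_lock_000..002

  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected: JSON-RPC error -32603 "Failed to claim CPU core: 2",
  # because the first process still holds the core-2 lock file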
00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59383 /var/tmp/spdk.sock 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59383 ']' 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59400 /var/tmp/spdk2.sock 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59400 ']' 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.521 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.782 00:06:39.782 real 0m3.046s 00:06:39.782 user 0m1.038s 00:06:39.782 sys 0m0.114s 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.782 15:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.782 ************************************ 00:06:39.782 END TEST locking_overlapped_coremask_via_rpc 00:06:39.782 ************************************ 00:06:39.782 15:54:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.782 15:54:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59383 ]] 00:06:39.782 15:54:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59383 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59383 ']' 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59383 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59383 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.782 killing process with pid 59383 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59383' 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59383 00:06:39.782 15:54:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59383 00:06:41.697 15:54:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59400 ]] 00:06:41.697 15:54:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59400 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59400 ']' 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59400 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.697 
15:54:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59400 00:06:41.697 killing process with pid 59400 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59400' 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59400 00:06:41.697 15:54:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59400 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59383 ]] 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59383 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59383 ']' 00:06:43.080 Process with pid 59383 is not found 00:06:43.080 Process with pid 59400 is not found 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59383 00:06:43.080 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59383) - No such process 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59383 is not found' 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59400 ]] 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59400 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59400 ']' 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59400 00:06:43.080 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59400) - No such process 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59400 is not found' 00:06:43.080 15:54:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.080 ************************************ 00:06:43.080 END TEST cpu_locks 00:06:43.080 ************************************ 00:06:43.080 00:06:43.080 real 0m30.187s 00:06:43.080 user 0m54.378s 00:06:43.080 sys 0m4.393s 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.080 15:54:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.080 ************************************ 00:06:43.080 END TEST event 00:06:43.080 ************************************ 00:06:43.080 00:06:43.080 real 0m56.239s 00:06:43.080 user 1m46.841s 00:06:43.080 sys 0m7.464s 00:06:43.080 15:54:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.080 15:54:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.080 15:54:41 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.080 15:54:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.080 15:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.080 15:54:41 -- common/autotest_common.sh@10 -- # set +x 00:06:43.080 ************************************ 00:06:43.080 START TEST thread 00:06:43.080 ************************************ 00:06:43.080 15:54:41 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.080 * Looking for test storage... 
00:06:43.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:43.080 15:54:41 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.080 15:54:41 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.080 15:54:41 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.080 15:54:41 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.080 15:54:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.080 15:54:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.080 15:54:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.080 15:54:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.080 15:54:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.080 15:54:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.080 15:54:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.081 15:54:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.081 15:54:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.081 15:54:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.081 15:54:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.081 15:54:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:43.081 15:54:41 thread -- scripts/common.sh@345 -- # : 1 00:06:43.081 15:54:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.081 15:54:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.081 15:54:41 thread -- scripts/common.sh@365 -- # decimal 1 00:06:43.081 15:54:41 thread -- scripts/common.sh@353 -- # local d=1 00:06:43.081 15:54:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.081 15:54:41 thread -- scripts/common.sh@355 -- # echo 1 00:06:43.081 15:54:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.081 15:54:41 thread -- scripts/common.sh@366 -- # decimal 2 00:06:43.081 15:54:41 thread -- scripts/common.sh@353 -- # local d=2 00:06:43.081 15:54:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.081 15:54:41 thread -- scripts/common.sh@355 -- # echo 2 00:06:43.081 15:54:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.081 15:54:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.081 15:54:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.081 15:54:41 thread -- scripts/common.sh@368 -- # return 0 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.081 --rc genhtml_branch_coverage=1 00:06:43.081 --rc genhtml_function_coverage=1 00:06:43.081 --rc genhtml_legend=1 00:06:43.081 --rc geninfo_all_blocks=1 00:06:43.081 --rc geninfo_unexecuted_blocks=1 00:06:43.081 00:06:43.081 ' 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.081 --rc genhtml_branch_coverage=1 00:06:43.081 --rc genhtml_function_coverage=1 00:06:43.081 --rc genhtml_legend=1 00:06:43.081 --rc geninfo_all_blocks=1 00:06:43.081 --rc geninfo_unexecuted_blocks=1 00:06:43.081 00:06:43.081 ' 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:43.081 --rc genhtml_branch_coverage=1 00:06:43.081 --rc genhtml_function_coverage=1 00:06:43.081 --rc genhtml_legend=1 00:06:43.081 --rc geninfo_all_blocks=1 00:06:43.081 --rc geninfo_unexecuted_blocks=1 00:06:43.081 00:06:43.081 ' 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.081 --rc genhtml_branch_coverage=1 00:06:43.081 --rc genhtml_function_coverage=1 00:06:43.081 --rc genhtml_legend=1 00:06:43.081 --rc geninfo_all_blocks=1 00:06:43.081 --rc geninfo_unexecuted_blocks=1 00:06:43.081 00:06:43.081 ' 00:06:43.081 15:54:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.081 15:54:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.081 ************************************ 00:06:43.081 START TEST thread_poller_perf 00:06:43.081 ************************************ 00:06:43.081 15:54:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.340 [2024-11-20 15:54:41.329999] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:43.340 [2024-11-20 15:54:41.330270] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59561 ] 00:06:43.340 [2024-11-20 15:54:41.491947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.600 [2024-11-20 15:54:41.592631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.600 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:44.545 [2024-11-20T15:54:42.795Z] ====================================== 00:06:44.545 [2024-11-20T15:54:42.795Z] busy:2609835742 (cyc) 00:06:44.545 [2024-11-20T15:54:42.795Z] total_run_count: 302000 00:06:44.545 [2024-11-20T15:54:42.795Z] tsc_hz: 2600000000 (cyc) 00:06:44.545 [2024-11-20T15:54:42.795Z] ====================================== 00:06:44.545 [2024-11-20T15:54:42.796Z] poller_cost: 8641 (cyc), 3323 (nsec) 00:06:44.546 00:06:44.546 real 0m1.462s 00:06:44.546 user 0m1.282s 00:06:44.546 sys 0m0.070s 00:06:44.546 15:54:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.546 15:54:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.546 ************************************ 00:06:44.546 END TEST thread_poller_perf 00:06:44.546 ************************************ 00:06:44.810 15:54:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.810 15:54:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:44.810 15:54:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.810 15:54:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.810 ************************************ 00:06:44.810 START TEST thread_poller_perf 00:06:44.810 ************************************ 00:06:44.810 15:54:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.810 [2024-11-20 15:54:42.851221] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:44.810 [2024-11-20 15:54:42.851334] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:06:44.810 [2024-11-20 15:54:43.012774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.071 [2024-11-20 15:54:43.115555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.071 Running 1000 pollers for 1 seconds with 0 microseconds period. 
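The poller_cost figures in these tables are plain ratios over the measured second: busy cycles divided by total poller invocations, then converted to nanoseconds via tsc_hz. For the 1 microsecond run above, 2609835742 / 302000 is about 8641 cycles per call, and 8641 / 2.6 is about 3323 ns at the reported 2.6 GHz TSC. The 0 microsecond run reported below comes in near 655 cycles (about 251 ns); the gap is presumably the extra timer bookkeeping a periodic poller pays per invocation. The same arithmetic in shell form (values copied from the tables; illustrative only):

  echo $(( 2609835742 / 302000 ))                # -> 8641 cycles per poller call
  awk 'BEGIN { printf "%d ns\n", 8641 / 2.6 }'   # -> 3323 ns at tsc_hz 2600000000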
00:06:46.458 [2024-11-20T15:54:44.708Z] ====================================== 00:06:46.458 [2024-11-20T15:54:44.708Z] busy:2603557498 (cyc) 00:06:46.458 [2024-11-20T15:54:44.708Z] total_run_count: 3971000 00:06:46.458 [2024-11-20T15:54:44.708Z] tsc_hz: 2600000000 (cyc) 00:06:46.458 [2024-11-20T15:54:44.708Z] ====================================== 00:06:46.458 [2024-11-20T15:54:44.708Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:46.458 00:06:46.458 real 0m1.460s 00:06:46.458 user 0m1.274s 00:06:46.458 sys 0m0.078s 00:06:46.458 15:54:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.458 15:54:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.458 ************************************ 00:06:46.458 END TEST thread_poller_perf 00:06:46.458 ************************************ 00:06:46.458 15:54:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:46.458 00:06:46.458 real 0m3.183s 00:06:46.458 user 0m2.669s 00:06:46.458 sys 0m0.261s 00:06:46.458 15:54:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.458 15:54:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.458 ************************************ 00:06:46.458 END TEST thread 00:06:46.458 ************************************ 00:06:46.458 15:54:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:46.458 15:54:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.458 15:54:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.458 15:54:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.458 15:54:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.458 ************************************ 00:06:46.458 START TEST app_cmdline 00:06:46.458 ************************************ 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.458 * Looking for test storage... 
00:06:46.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.458 15:54:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.458 --rc genhtml_branch_coverage=1 00:06:46.458 --rc genhtml_function_coverage=1 00:06:46.458 --rc genhtml_legend=1 00:06:46.458 --rc geninfo_all_blocks=1 00:06:46.458 --rc geninfo_unexecuted_blocks=1 00:06:46.458 00:06:46.458 ' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.458 --rc genhtml_branch_coverage=1 00:06:46.458 --rc genhtml_function_coverage=1 00:06:46.458 --rc genhtml_legend=1 00:06:46.458 --rc geninfo_all_blocks=1 00:06:46.458 --rc geninfo_unexecuted_blocks=1 00:06:46.458 
00:06:46.458 ' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.458 --rc genhtml_branch_coverage=1 00:06:46.458 --rc genhtml_function_coverage=1 00:06:46.458 --rc genhtml_legend=1 00:06:46.458 --rc geninfo_all_blocks=1 00:06:46.458 --rc geninfo_unexecuted_blocks=1 00:06:46.458 00:06:46.458 ' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.458 --rc genhtml_branch_coverage=1 00:06:46.458 --rc genhtml_function_coverage=1 00:06:46.458 --rc genhtml_legend=1 00:06:46.458 --rc geninfo_all_blocks=1 00:06:46.458 --rc geninfo_unexecuted_blocks=1 00:06:46.458 00:06:46.458 ' 00:06:46.458 15:54:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:46.458 15:54:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59681 00:06:46.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.458 15:54:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59681 00:06:46.458 15:54:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59681 ']' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.458 15:54:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.458 [2024-11-20 15:54:44.613366] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
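The target for this test is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, meaning those are the only two RPC methods it will serve. The trace that follows exercises both sides of the allowlist: spdk_get_version returns the version object, while a method outside the list is rejected with -32601. In sketch form (mirroring the commands in the trace; purely illustrative):

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version          # allowed: returns the SPDK version object
  scripts/rpc.py rpc_get_methods           # allowed: lists the permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats    # not allowed: error -32601 "Method not found"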
00:06:46.458 [2024-11-20 15:54:44.613482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:06:46.718 [2024-11-20 15:54:44.770902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.718 [2024-11-20 15:54:44.872966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.291 15:54:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.291 15:54:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:47.291 15:54:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:47.552 { 00:06:47.552 "version": "SPDK v25.01-pre git sha1 0728de5b0", 00:06:47.552 "fields": { 00:06:47.552 "major": 25, 00:06:47.552 "minor": 1, 00:06:47.552 "patch": 0, 00:06:47.552 "suffix": "-pre", 00:06:47.552 "commit": "0728de5b0" 00:06:47.552 } 00:06:47.552 } 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:47.552 15:54:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:47.552 15:54:45 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.813 request: 00:06:47.813 { 00:06:47.813 "method": "env_dpdk_get_mem_stats", 00:06:47.813 "req_id": 1 00:06:47.813 } 00:06:47.813 Got JSON-RPC error response 00:06:47.813 response: 00:06:47.813 { 00:06:47.813 "code": -32601, 00:06:47.813 "message": "Method not found" 00:06:47.813 } 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.813 15:54:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59681 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59681 ']' 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59681 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59681 00:06:47.813 killing process with pid 59681 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59681' 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59681 00:06:47.813 15:54:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59681 00:06:49.758 ************************************ 00:06:49.758 END TEST app_cmdline 00:06:49.758 ************************************ 00:06:49.758 00:06:49.758 real 0m3.106s 00:06:49.758 user 0m3.393s 00:06:49.758 sys 0m0.452s 00:06:49.758 15:54:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.758 15:54:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.758 15:54:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.758 15:54:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.758 15:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.758 15:54:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.758 ************************************ 00:06:49.758 START TEST version 00:06:49.758 ************************************ 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.758 * Looking for test storage... 
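Next up is the version test. The trace below assembles the release string the way version.sh does: it greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h, strips the quotes, and compares the result against what the Python package reports. A condensed sketch of that pipeline (the suffix-to-rc0 step is paraphrased from the trace rather than quoted from the script):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 25
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 1
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 0
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  version=${version}rc0                    # the -pre suffix maps to rc0, as the trace shows
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]]          # both sides read 25.1rc0 in the run below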
00:06:49.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.758 15:54:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.758 15:54:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.758 15:54:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.758 15:54:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.758 15:54:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.758 15:54:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.758 15:54:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.758 15:54:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.758 15:54:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.758 15:54:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.758 15:54:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.758 15:54:47 version -- scripts/common.sh@344 -- # case "$op" in 00:06:49.758 15:54:47 version -- scripts/common.sh@345 -- # : 1 00:06:49.758 15:54:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.758 15:54:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.758 15:54:47 version -- scripts/common.sh@365 -- # decimal 1 00:06:49.758 15:54:47 version -- scripts/common.sh@353 -- # local d=1 00:06:49.758 15:54:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.758 15:54:47 version -- scripts/common.sh@355 -- # echo 1 00:06:49.758 15:54:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.758 15:54:47 version -- scripts/common.sh@366 -- # decimal 2 00:06:49.758 15:54:47 version -- scripts/common.sh@353 -- # local d=2 00:06:49.758 15:54:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.758 15:54:47 version -- scripts/common.sh@355 -- # echo 2 00:06:49.758 15:54:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.758 15:54:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.758 15:54:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.758 15:54:47 version -- scripts/common.sh@368 -- # return 0 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.758 --rc genhtml_branch_coverage=1 00:06:49.758 --rc genhtml_function_coverage=1 00:06:49.758 --rc genhtml_legend=1 00:06:49.758 --rc geninfo_all_blocks=1 00:06:49.758 --rc geninfo_unexecuted_blocks=1 00:06:49.758 00:06:49.758 ' 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.758 --rc genhtml_branch_coverage=1 00:06:49.758 --rc genhtml_function_coverage=1 00:06:49.758 --rc genhtml_legend=1 00:06:49.758 --rc geninfo_all_blocks=1 00:06:49.758 --rc geninfo_unexecuted_blocks=1 00:06:49.758 00:06:49.758 ' 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.758 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:49.758 --rc genhtml_branch_coverage=1 00:06:49.758 --rc genhtml_function_coverage=1 00:06:49.758 --rc genhtml_legend=1 00:06:49.758 --rc geninfo_all_blocks=1 00:06:49.758 --rc geninfo_unexecuted_blocks=1 00:06:49.758 00:06:49.758 ' 00:06:49.758 15:54:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.758 --rc genhtml_branch_coverage=1 00:06:49.758 --rc genhtml_function_coverage=1 00:06:49.758 --rc genhtml_legend=1 00:06:49.758 --rc geninfo_all_blocks=1 00:06:49.758 --rc geninfo_unexecuted_blocks=1 00:06:49.758 00:06:49.758 ' 00:06:49.758 15:54:47 version -- app/version.sh@17 -- # get_header_version major 00:06:49.758 15:54:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # cut -f2 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.758 15:54:47 version -- app/version.sh@17 -- # major=25 00:06:49.758 15:54:47 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.758 15:54:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # cut -f2 00:06:49.758 15:54:47 version -- app/version.sh@18 -- # minor=1 00:06:49.758 15:54:47 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # cut -f2 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.758 15:54:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.758 15:54:47 version -- app/version.sh@19 -- # patch=0 00:06:49.758 15:54:47 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.758 15:54:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # cut -f2 00:06:49.758 15:54:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.758 15:54:47 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.758 15:54:47 version -- app/version.sh@22 -- # version=25.1 00:06:49.758 15:54:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.758 15:54:47 version -- app/version.sh@28 -- # version=25.1rc0 00:06:49.758 15:54:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.758 15:54:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.758 15:54:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:49.758 15:54:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:49.758 00:06:49.758 real 0m0.217s 00:06:49.758 user 0m0.127s 00:06:49.758 sys 0m0.112s 00:06:49.759 ************************************ 00:06:49.759 END TEST version 00:06:49.759 ************************************ 00:06:49.759 15:54:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.759 15:54:47 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.759 15:54:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:49.759 15:54:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:49.759 15:54:47 -- spdk/autotest.sh@194 -- # uname -s 00:06:49.759 15:54:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:49.759 15:54:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.759 15:54:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.759 15:54:47 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:49.759 15:54:47 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:49.759 15:54:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.759 15:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.759 15:54:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.759 ************************************ 00:06:49.759 START TEST blockdev_nvme 00:06:49.759 ************************************ 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:49.759 * Looking for test storage... 00:06:49.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.759 15:54:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.759 --rc genhtml_branch_coverage=1 00:06:49.759 --rc genhtml_function_coverage=1 00:06:49.759 --rc genhtml_legend=1 00:06:49.759 --rc geninfo_all_blocks=1 00:06:49.759 --rc geninfo_unexecuted_blocks=1 00:06:49.759 00:06:49.759 ' 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.759 --rc genhtml_branch_coverage=1 00:06:49.759 --rc genhtml_function_coverage=1 00:06:49.759 --rc genhtml_legend=1 00:06:49.759 --rc geninfo_all_blocks=1 00:06:49.759 --rc geninfo_unexecuted_blocks=1 00:06:49.759 00:06:49.759 ' 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.759 --rc genhtml_branch_coverage=1 00:06:49.759 --rc genhtml_function_coverage=1 00:06:49.759 --rc genhtml_legend=1 00:06:49.759 --rc geninfo_all_blocks=1 00:06:49.759 --rc geninfo_unexecuted_blocks=1 00:06:49.759 00:06:49.759 ' 00:06:49.759 15:54:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.759 --rc genhtml_branch_coverage=1 00:06:49.759 --rc genhtml_function_coverage=1 00:06:49.759 --rc genhtml_legend=1 00:06:49.759 --rc geninfo_all_blocks=1 00:06:49.759 --rc geninfo_unexecuted_blocks=1 00:06:49.759 00:06:49.759 ' 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:49.759 15:54:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:49.759 15:54:47 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:49.759 15:54:48 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59859 00:06:49.759 15:54:48 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:49.759 15:54:48 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59859 00:06:49.759 15:54:48 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:49.759 15:54:48 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59859 ']' 00:06:49.759 15:54:48 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.759 15:54:48 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.759 15:54:48 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.759 15:54:48 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.759 15:54:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.021 [2024-11-20 15:54:48.075252] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
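The blockdev_nvme suite starts its own target and then loads a bdev configuration produced by gen_nvme.sh: four bdev_nvme_attach_controller calls, one per QEMU NVMe controller at 0000:00:10.0 through 0000:00:13.0, shown in full in the trace below. A trimmed one-controller version of that configuration step (mirroring the rpc_cmd invocation that follows; illustrative):

  scripts/rpc.py load_subsystem_config -j '{ "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] }'
  scripts/rpc.py bdev_get_bdevs            # dumps Nvme0n1 and friends, as in the JSON below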
00:06:50.021 [2024-11-20 15:54:48.075520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:06:50.282 [2024-11-20 15:54:48.280043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.282 [2024-11-20 15:54:48.415472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.855 15:54:49 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.856 15:54:49 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:50.856 15:54:49 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:50.856 15:54:49 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:50.856 15:54:49 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:50.856 15:54:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:50.856 15:54:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:50.856 15:54:49 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:50.856 15:54:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.856 15:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.433 15:54:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.433 15:54:49 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:51.433 15:54:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.434 15:54:49 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 15:54:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:51.434 15:54:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:51.435 15:54:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "23fd31e7-9a33-4a07-882f-160a6abf2839"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "23fd31e7-9a33-4a07-882f-160a6abf2839",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2503e050-c3fb-41c0-affa-529ae585d413"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2503e050-c3fb-41c0-affa-529ae585d413",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "855babdc-a285-4b4d-9b0f-20c90117f08a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "855babdc-a285-4b4d-9b0f-20c90117f08a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "45b9864a-a9c8-4ee6-a167-76e87d3d66cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "45b9864a-a9c8-4ee6-a167-76e87d3d66cf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ae9c61ab-e160-4c4e-ac19-a39d81752f29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "ae9c61ab-e160-4c4e-ac19-a39d81752f29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "65a90954-0570-4539-bdea-b37120255566"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "65a90954-0570-4539-bdea-b37120255566",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:51.435 15:54:49 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:51.435 15:54:49 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:51.435 15:54:49 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:51.435 15:54:49 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59859 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59859 ']' 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59859 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:51.435 15:54:49 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59859 00:06:51.435 killing process with pid 59859 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59859' 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59859 00:06:51.435 15:54:49 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59859 00:06:52.820 15:54:51 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:52.820 15:54:51 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.820 15:54:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:52.820 15:54:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.820 15:54:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.080 ************************************ 00:06:53.080 START TEST bdev_hello_world 00:06:53.080 ************************************ 00:06:53.080 15:54:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:53.080 [2024-11-20 15:54:51.144741] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:53.080 [2024-11-20 15:54:51.145030] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:06:53.080 [2024-11-20 15:54:51.303996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.341 [2024-11-20 15:54:51.406487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.912 [2024-11-20 15:54:51.951864] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:53.912 [2024-11-20 15:54:51.951918] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:53.912 [2024-11-20 15:54:51.951944] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:53.912 [2024-11-20 15:54:51.954442] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:53.912 [2024-11-20 15:54:51.955255] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:53.912 [2024-11-20 15:54:51.955279] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:53.912 [2024-11-20 15:54:51.956274] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:53.912 00:06:53.912 [2024-11-20 15:54:51.956328] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:54.484 ************************************ 00:06:54.484 END TEST bdev_hello_world 00:06:54.484 00:06:54.484 real 0m1.600s 00:06:54.484 user 0m1.321s 00:06:54.484 sys 0m0.171s 00:06:54.484 15:54:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.484 15:54:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:54.484 ************************************ 00:06:54.745 15:54:52 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:54.745 15:54:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.745 15:54:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.745 15:54:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.745 ************************************ 00:06:54.745 START TEST bdev_bounds 00:06:54.745 ************************************ 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59979 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.745 Process bdevio pid: 59979 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59979' 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59979 00:06:54.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59979 ']' 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.745 15:54:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.745 [2024-11-20 15:54:52.810873] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:54.745 [2024-11-20 15:54:52.810992] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59979 ] 00:06:54.745 [2024-11-20 15:54:52.973829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.004 [2024-11-20 15:54:53.079815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.004 [2024-11-20 15:54:53.080062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.004 [2024-11-20 15:54:53.080145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.575 15:54:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.576 15:54:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:55.576 15:54:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:55.576 I/O targets: 00:06:55.576 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:55.576 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:55.576 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.576 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.576 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.576 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:55.576 00:06:55.576 00:06:55.576 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.576 http://cunit.sourceforge.net/ 00:06:55.576 00:06:55.576 00:06:55.576 Suite: bdevio tests on: Nvme3n1 00:06:55.576 Test: blockdev write read block ...passed 00:06:55.576 Test: blockdev write zeroes read block ...passed 00:06:55.576 Test: blockdev write zeroes read no split ...passed 00:06:55.576 Test: blockdev write zeroes read split ...passed 00:06:55.576 Test: blockdev write zeroes read split partial ...passed 00:06:55.576 Test: blockdev reset ...[2024-11-20 15:54:53.807680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:55.576 [2024-11-20 15:54:53.810550] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:55.576 passed 00:06:55.576 Test: blockdev write read 8 blocks ...passed 00:06:55.576 Test: blockdev write read size > 128k ...passed 00:06:55.576 Test: blockdev write read invalid size ...passed 00:06:55.576 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.576 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.576 Test: blockdev write read max offset ...passed 00:06:55.576 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.576 Test: blockdev writev readv 8 blocks ...passed 00:06:55.576 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.576 Test: blockdev writev readv block ...passed 00:06:55.576 Test: blockdev writev readv size > 128k ...passed 00:06:55.576 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.576 Test: blockdev comparev and writev ...[2024-11-20 15:54:53.820661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0a0a000 len:0x1000 00:06:55.576 [2024-11-20 15:54:53.820826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.576 passed 00:06:55.576 Test: blockdev nvme passthru rw ...passed 00:06:55.576 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:54:53.823324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.834 [2024-11-20 15:54:53.823456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.834 passed 00:06:55.834 Test: blockdev nvme admin passthru ...passed 00:06:55.834 Test: blockdev copy ...passed
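Note: the *NOTICE* completions interleaved with the test names above are expected output, not failures. The "comparev and writev" case deliberately issues a COMPARE that miscompares, so the controller answers with COMPARE FAILURE (02/85) (status code type 0x2, status code 0x85 in NVMe terms) and the case is still counted as passed; the INVALID OPCODE (00/01) completions in the passthru cases are likewise the intentional negative path. Which optional I/O types a bdev advertises can be read back out of bdev_get_bdevs; a minimal sketch with jq, assuming a running target on the default /var/tmp/spdk.sock and the rpc.py path used throughout this log:

  # list each bdev with the compare-related capabilities it reports
  $ sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | "\(.name)\tcompare=\(.supported_io_types.compare)\tcompare_and_write=\(.supported_io_types.compare_and_write)"'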
00:06:55.834 Suite: bdevio tests on: Nvme2n3 00:06:55.834 Test: blockdev write read block ...passed 00:06:55.834 Test: blockdev write zeroes read block ...passed 00:06:55.834 Test: blockdev write zeroes read no split ...passed 00:06:55.834 Test: blockdev write zeroes read split ...passed 00:06:55.834 Test: blockdev write zeroes read split partial ...passed 00:06:55.834 Test: blockdev reset ...[2024-11-20 15:54:53.879840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.835 [2024-11-20 15:54:53.882929] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:55.835 passed 00:06:55.835 Test: blockdev write read 8 blocks ...passed 00:06:55.835 Test: blockdev write read size > 128k ...passed 00:06:55.835 Test: blockdev write read invalid size ...passed 00:06:55.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.835 Test: blockdev write read max offset ...passed 00:06:55.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.835 Test: blockdev writev readv 8 blocks ...passed 00:06:55.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.835 Test: blockdev writev readv block ...passed 00:06:55.835 Test: blockdev writev readv size > 128k ...passed 00:06:55.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.835 Test: blockdev comparev and writev ...[2024-11-20 15:54:53.889432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4e06000 len:0x1000 00:06:55.835 [2024-11-20 15:54:53.889570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.835 passed 00:06:55.835 Test: blockdev nvme passthru rw ...passed 00:06:55.835 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:54:53.890115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.835 [2024-11-20 15:54:53.890143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.835 passed 00:06:55.835 Test: blockdev nvme admin passthru ...passed 00:06:55.835 Test: blockdev copy ...passed
00:06:55.835 Suite: bdevio tests on: Nvme2n2 00:06:55.835 Test: blockdev write read block ...passed 00:06:55.835 Test: blockdev write zeroes read block ...passed 00:06:55.835 Test: blockdev write zeroes read no split ...passed 00:06:55.835 Test: blockdev write zeroes read split ...passed 00:06:55.835 Test: blockdev write zeroes read split partial ...passed 00:06:55.835 Test: blockdev reset ...[2024-11-20 15:54:53.950571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.835 [2024-11-20 15:54:53.953478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:55.835 passed 00:06:55.835 Test: blockdev write read 8 blocks ...passed 00:06:55.835 Test: blockdev write read size > 128k ...passed 00:06:55.835 Test: blockdev write read invalid size ...passed 00:06:55.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.835 Test: blockdev write read max offset ...passed 00:06:55.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.835 Test: blockdev writev readv 8 blocks ...passed 00:06:55.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.835 Test: blockdev writev readv block ...passed 00:06:55.835 Test: blockdev writev readv size > 128k ...passed 00:06:55.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.835 Test: blockdev comparev and writev ...[2024-11-20 15:54:53.962953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d123c000 len:0x1000 00:06:55.835 [2024-11-20 15:54:53.963075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.835 passed 00:06:55.835 Test: blockdev nvme passthru rw ...passed 00:06:55.835 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:54:53.964035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.835 [2024-11-20 15:54:53.964340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.835 passed 00:06:55.835 Test: blockdev nvme admin passthru ...passed 00:06:55.835 Test: blockdev copy ...passed
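Note: each suite's "blockdev reset" case shows the same two-step pattern seen above: nvme_ctrlr_disconnect logs "resetting controller", and bdev_nvme_reset_ctrlr_complete logs "Resetting controller successful." once the controller has come back; the read/write cases that follow only run after the reset completes, which is why they still pass. The same reset can be triggered by hand against a running target; a sketch, assuming this SPDK version ships the bdev_nvme_reset_controller RPC and a controller attached under the name Nvme0 as in the configuration earlier in the log:

  # ask bdev_nvme to disconnect and reconnect the controller named Nvme0
  $ sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme0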
00:06:55.835 Suite: bdevio tests on: Nvme2n1 00:06:55.835 Test: blockdev write read block ...passed 00:06:55.835 Test: blockdev write zeroes read block ...passed 00:06:55.835 Test: blockdev write zeroes read no split ...passed 00:06:55.835 Test: blockdev write zeroes read split ...passed 00:06:55.835 Test: blockdev write zeroes read split partial ...passed 00:06:55.835 Test: blockdev reset ...[2024-11-20 15:54:54.019475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.835 [2024-11-20 15:54:54.023859] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:55.835 passed 00:06:55.835 Test: blockdev write read 8 blocks ...passed 00:06:55.835 Test: blockdev write read size > 128k ...passed 00:06:55.835 Test: blockdev write read invalid size ...passed 00:06:55.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.835 Test: blockdev write read max offset ...passed 00:06:55.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.835 Test: blockdev writev readv 8 blocks ...passed 00:06:55.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.835 Test: blockdev writev readv block ...passed 00:06:55.835 Test: blockdev writev readv size > 128k ...passed 00:06:55.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.835 Test: blockdev comparev and writev ...[2024-11-20 15:54:54.030813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1238000 len:0x1000 00:06:55.835 [2024-11-20 15:54:54.030949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.835 passed 00:06:55.835 Test: blockdev nvme passthru rw ...passed 00:06:55.835 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:54:54.031538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.835 [2024-11-20 15:54:54.031627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.835 passed 00:06:55.835 Test: blockdev nvme admin passthru ...passed 00:06:55.835 Test: blockdev copy ...passed
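Note: Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of the single controller at 0000:00:12.0 (serial 12342 in the bdev dump further up), which is why their reset cases all print the same PCI address. The namespace-to-controller mapping can be read back out of driver_specific.nvme; a sketch with jq, under the same running-target and rpc.py-path assumptions as the earlier snippet:

  # print each NVMe bdev next to the PCI address of its backing controller
  $ sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | "\(.name)\t\(.driver_specific.nvme[0].pci_address)"'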
00:06:55.835 Suite: bdevio tests on: Nvme1n1 00:06:55.835 Test: blockdev write read block ...passed 00:06:55.835 Test: blockdev write zeroes read block ...passed 00:06:55.835 Test: blockdev write zeroes read no split ...passed 00:06:55.835 Test: blockdev write zeroes read split ...passed 00:06:56.094 Test: blockdev write zeroes read split partial ...passed 00:06:56.094 Test: blockdev reset ...[2024-11-20 15:54:54.085449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:56.094 [2024-11-20 15:54:54.088240] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:56.094 passed 00:06:56.094 Test: blockdev write read 8 blocks ...passed 00:06:56.094 Test: blockdev write read size > 128k ...passed 00:06:56.094 Test: blockdev write read invalid size ...passed 00:06:56.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.094 Test: blockdev write read max offset ...passed 00:06:56.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.094 Test: blockdev writev readv 8 blocks ...passed 00:06:56.094 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.094 Test: blockdev writev readv block ...passed 00:06:56.094 Test: blockdev writev readv size > 128k ...passed 00:06:56.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.094 Test: blockdev comparev and writev ...[2024-11-20 15:54:54.093677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1234000 len:0x1000 00:06:56.094 [2024-11-20 15:54:54.093718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.094 passed 00:06:56.094 Test: blockdev nvme passthru rw ...passed 00:06:56.094 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:54:54.094347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:56.094 [2024-11-20 15:54:54.094470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:56.094 passed 00:06:56.094 Test: blockdev nvme admin passthru ...passed 00:06:56.094 Test: blockdev copy ...passed
00:06:56.094 Suite: bdevio tests on: Nvme0n1 00:06:56.094 Test: blockdev write read block ...passed 00:06:56.094 Test: blockdev write zeroes read block ...passed 00:06:56.094 Test: blockdev write zeroes read no split ...passed 00:06:56.094 Test: blockdev write zeroes read split ...passed 00:06:56.094 Test: blockdev write zeroes read split partial ...passed 00:06:56.094 Test: blockdev reset ...[2024-11-20 15:54:54.136718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:56.094 [2024-11-20 15:54:54.139554] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:56.094 passed 00:06:56.094 Test: blockdev write read 8 blocks ...passed 00:06:56.094 Test: blockdev write read size > 128k ...passed 00:06:56.094 Test: blockdev write read invalid size ...passed 00:06:56.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.094 Test: blockdev write read max offset ...passed 00:06:56.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.094 Test: blockdev writev readv 8 blocks ...passed 00:06:56.094 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.094 Test: blockdev writev readv block ...passed 00:06:56.094 Test: blockdev writev readv size > 128k ...passed 00:06:56.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.094 Test: blockdev comparev and writev ...[2024-11-20 15:54:54.145348] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:56.094 separate metadata which is not supported yet. 00:06:56.094 passed
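Note: on Nvme0n1 the comparev_and_writev case is skipped rather than failed: that namespace is formatted with 64 bytes of separate, non-interleaved metadata per 4096-byte block (md_size 64, md_interleave false in the bdev dump above), a layout bdevio does not support yet, as the *ERROR* line says. A quick way to spot such bdevs, sketched with jq under the same assumptions as the earlier snippets:

  # list bdevs that carry per-block metadata, with size and interleave mode
  $ sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | select(.md_size != null and .md_size > 0) | "\(.name)\tmd_size=\(.md_size)\tmd_interleave=\(.md_interleave)"'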
00:06:56.094 Test: blockdev nvme passthru rw ...passed 00:06:56.094 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:54:54.146004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:56.094 [2024-11-20 15:54:54.146114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:56.094 passed 00:06:56.094 Test: blockdev nvme admin passthru ...passed 00:06:56.094 Test: blockdev copy ...passed 00:06:56.094 00:06:56.094 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.094 suites 6 6 n/a 0 0 00:06:56.094 tests 138 138 138 0 0 00:06:56.094 asserts 893 893 893 0 n/a 00:06:56.094 00:06:56.094 Elapsed time = 1.017 seconds 00:06:56.094 0 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59979 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59979 ']' 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59979 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59979 00:06:56.094 killing process with pid 59979 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59979' 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59979 00:06:56.094 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59979 00:06:56.658 15:54:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:56.658 00:06:56.658 real 0m2.121s 00:06:56.658 user 0m5.374s 00:06:56.658 sys 0m0.296s 00:06:56.658 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.658 ************************************ 00:06:56.658 END TEST bdev_bounds 00:06:56.658 ************************************ 00:06:56.658 15:54:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:56.658 15:54:54 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:56.659 15:54:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.659 15:54:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.659 15:54:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.659 ************************************ 00:06:56.659 START TEST bdev_nbd 00:06:56.659 ************************************ 00:06:56.659 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:56.659 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:56.931 15:54:54
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:56.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60039 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60039 /var/tmp/spdk-nbd.sock 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60039 ']' 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:56.931 15:54:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.931 [2024-11-20 15:54:54.974592] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:56.931 [2024-11-20 15:54:54.974884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.931 [2024-11-20 15:54:55.136535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.241 [2024-11-20 15:54:55.235781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:57.805 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:57.806 15:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.806 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.063 1+0 records in 
00:06:58.063 1+0 records out 00:06:58.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422477 s, 9.7 MB/s 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.063 1+0 records in 00:06:58.063 1+0 records out 00:06:58.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408247 s, 10.0 MB/s 00:06:58.063 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.064 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.064 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.064 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.064 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.064 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.064 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.320 1+0 records in 00:06:58.320 1+0 records out 00:06:58.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034191 s, 12.0 MB/s 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.320 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.578 1+0 records in 00:06:58.578 1+0 records out 00:06:58.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038709 s, 10.6 MB/s 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.578 15:54:56 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.578 15:54:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.836 1+0 records in 00:06:58.836 1+0 records out 00:06:58.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426029 s, 9.6 MB/s 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.836 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.837 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.094 1+0 records in 00:06:59.094 1+0 records out 00:06:59.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468949 s, 8.7 MB/s 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.094 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd0", 00:06:59.351 "bdev_name": "Nvme0n1" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd1", 00:06:59.351 "bdev_name": "Nvme1n1" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd2", 00:06:59.351 "bdev_name": "Nvme2n1" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd3", 00:06:59.351 "bdev_name": "Nvme2n2" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd4", 00:06:59.351 "bdev_name": "Nvme2n3" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd5", 00:06:59.351 "bdev_name": "Nvme3n1" 00:06:59.351 } 00:06:59.351 ]' 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd0", 00:06:59.351 "bdev_name": "Nvme0n1" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd1", 00:06:59.351 "bdev_name": "Nvme1n1" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd2", 00:06:59.351 "bdev_name": "Nvme2n1" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd3", 00:06:59.351 "bdev_name": "Nvme2n2" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd4", 00:06:59.351 "bdev_name": "Nvme2n3" 00:06:59.351 }, 00:06:59.351 { 00:06:59.351 "nbd_device": "/dev/nbd5", 00:06:59.351 "bdev_name": "Nvme3n1" 00:06:59.351 } 00:06:59.351 ]' 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.351 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.608 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.866 15:54:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.125 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.383 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.641 15:54:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.899 15:54:59 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:00.899 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:01.185 /dev/nbd0 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.185 
15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.185 1+0 records in 00:07:01.185 1+0 records out 00:07:01.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367163 s, 11.2 MB/s 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:01.185 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:01.472 /dev/nbd1 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.472 1+0 records in 00:07:01.472 1+0 records out 00:07:01.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387767 s, 10.6 MB/s 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:01.472 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:01.729 /dev/nbd10 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.729 1+0 records in 00:07:01.729 1+0 records out 00:07:01.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457933 s, 8.9 MB/s 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:01.729 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:01.729 /dev/nbd11 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.986 15:54:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.986 15:54:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.986 1+0 records in 00:07:01.986 1+0 records out 00:07:01.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365351 s, 11.2 MB/s 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:01.986 /dev/nbd12 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.986 1+0 records in 00:07:01.986 1+0 records out 00:07:01.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387381 s, 10.6 MB/s 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.986 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:02.243 /dev/nbd13 
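The waitfornbd checks that repeat above for nbd0 through nbd13 all follow one pattern: poll /proc/partitions until the kernel registers the device (up to 20 tries), then prove it actually services I/O with a single direct 4 KiB read and a size check on the scratch file. A minimal sketch of that readiness probe, reconstructed from the trace; the real helper lives in common/autotest_common.sh, and the sleep between retries plus the /tmp scratch path are assumptions of this sketch, not values from the log:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # stop polling once the kernel lists the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the trace only shows the loop bounds
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct 4 KiB read proves the device is live, not just registered
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # mirrors the trace's '[' 4096 '!=' 0 ']' success path
    }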
00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.243 1+0 records in 00:07:02.243 1+0 records out 00:07:02.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345978 s, 11.8 MB/s 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.243 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd0", 00:07:02.501 "bdev_name": "Nvme0n1" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd1", 00:07:02.501 "bdev_name": "Nvme1n1" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd10", 00:07:02.501 "bdev_name": "Nvme2n1" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd11", 00:07:02.501 "bdev_name": "Nvme2n2" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd12", 00:07:02.501 "bdev_name": "Nvme2n3" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd13", 00:07:02.501 "bdev_name": "Nvme3n1" 00:07:02.501 } 00:07:02.501 ]' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd0", 00:07:02.501 "bdev_name": "Nvme0n1" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd1", 00:07:02.501 "bdev_name": "Nvme1n1" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd10", 00:07:02.501 "bdev_name": "Nvme2n1" 
00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd11", 00:07:02.501 "bdev_name": "Nvme2n2" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd12", 00:07:02.501 "bdev_name": "Nvme2n3" 00:07:02.501 }, 00:07:02.501 { 00:07:02.501 "nbd_device": "/dev/nbd13", 00:07:02.501 "bdev_name": "Nvme3n1" 00:07:02.501 } 00:07:02.501 ]' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.501 /dev/nbd1 00:07:02.501 /dev/nbd10 00:07:02.501 /dev/nbd11 00:07:02.501 /dev/nbd12 00:07:02.501 /dev/nbd13' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.501 /dev/nbd1 00:07:02.501 /dev/nbd10 00:07:02.501 /dev/nbd11 00:07:02.501 /dev/nbd12 00:07:02.501 /dev/nbd13' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:02.501 256+0 records in 00:07:02.501 256+0 records out 00:07:02.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623559 s, 168 MB/s 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.501 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.758 256+0 records in 00:07:02.758 256+0 records out 00:07:02.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0589441 s, 17.8 MB/s 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.759 256+0 records in 00:07:02.759 256+0 records out 00:07:02.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0626402 s, 16.7 MB/s 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:02.759 256+0 records in 00:07:02.759 256+0 records out 
00:07:02.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0663769 s, 15.8 MB/s 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:02.759 256+0 records in 00:07:02.759 256+0 records out 00:07:02.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0633513 s, 16.6 MB/s 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.759 15:55:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:03.016 256+0 records in 00:07:03.016 256+0 records out 00:07:03.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638942 s, 16.4 MB/s 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:03.016 256+0 records in 00:07:03.016 256+0 records out 00:07:03.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0667052 s, 15.7 MB/s 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.016 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.275 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.542 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.803 
15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.803 15:55:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:03.803 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:03.803 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:03.803 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:03.803 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.803 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.803 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:03.804 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.804 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.804 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.804 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.061 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.320 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:04.578 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:04.836 malloc_lvol_verify 00:07:04.836 15:55:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:05.094 80edd6cc-49f8-4c78-af1a-dcaf2274cc5b 00:07:05.094 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:05.094 b3a2c378-54d7-4ff1-96b7-c3ed630c1ea7 00:07:05.094 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:05.352 /dev/nbd0 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:05.352 mke2fs 1.47.0 (5-Feb-2023) 00:07:05.352 Discarding device blocks: 0/4096 done 00:07:05.352 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:05.352 00:07:05.352 Allocating group tables: 0/1 done 00:07:05.352 Writing inode tables: 0/1 done 00:07:05.352 Creating journal (1024 blocks): done 00:07:05.352 Writing superblocks and filesystem accounting information: 0/1 done 00:07:05.352 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
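The nbd_with_lvol_verify pass above chains four RPCs before formatting the exported device, so a successful mkfs.ext4 exercises the whole malloc-bdev, lvstore, lvol, NBD stack in one shot. A condensed replay of those calls as they appear in the trace; reading the numeric arguments as MiB sizes follows rpc.py's usual convention and is my gloss, and the UUIDs printed in the log (80edd6cc-..., b3a2c378-...) are the RPCs' return values:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                             # must succeed end-to-end
    $rpc -s $sock nbd_stop_disk /dev/nbd0                           # detach before teardown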
00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.352 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60039 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60039 ']' 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60039 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60039 00:07:05.609 killing process with pid 60039 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60039' 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60039 00:07:05.609 15:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60039 00:07:06.542 ************************************ 00:07:06.542 END TEST bdev_nbd 00:07:06.542 ************************************ 00:07:06.542 15:55:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:06.542 00:07:06.542 real 0m9.684s 00:07:06.542 user 0m13.969s 00:07:06.542 sys 0m3.042s 00:07:06.542 15:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.542 15:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:06.542 15:55:04 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:06.542 15:55:04 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:06.542 skipping fio tests on NVMe due to multi-ns failures. 00:07:06.542 15:55:04 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
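The killprocess 60039 sequence above shuts the NBD-serving SPDK app down defensively: confirm the pid is set and still alive, resolve its command name (reactor_0 for an SPDK reactor) so that a sudo wrapper could be handled specially, then kill and reap it. A sketch of that shutdown logic as the trace shows it, with the sudo special case reduced to a comment:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        # the real helper branches when process_name = sudo; omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the child so the test observes its exit status
    }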
00:07:06.542 15:55:04 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:06.542 15:55:04 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:06.542 15:55:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:06.542 15:55:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.542 15:55:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.542 ************************************ 00:07:06.542 START TEST bdev_verify 00:07:06.542 ************************************ 00:07:06.542 15:55:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:06.542 [2024-11-20 15:55:04.690655] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:06.542 [2024-11-20 15:55:04.690792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60407 ] 00:07:06.800 [2024-11-20 15:55:04.852297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.800 [2024-11-20 15:55:04.951223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.800 [2024-11-20 15:55:04.951425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.369 Running I/O for 5 seconds... 00:07:09.674 23360.00 IOPS, 91.25 MiB/s [2024-11-20T15:55:08.857Z] 23360.00 IOPS, 91.25 MiB/s [2024-11-20T15:55:09.794Z] 24064.00 IOPS, 94.00 MiB/s [2024-11-20T15:55:10.728Z] 23424.00 IOPS, 91.50 MiB/s [2024-11-20T15:55:10.728Z] 23680.00 IOPS, 92.50 MiB/s 00:07:12.478 Latency(us) 00:07:12.478 [2024-11-20T15:55:10.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.478 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x0 length 0xbd0bd 00:07:12.478 Nvme0n1 : 5.08 1928.27 7.53 0.00 0.00 66008.85 12451.84 72190.42 00:07:12.478 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:12.478 Nvme0n1 : 5.07 1955.17 7.64 0.00 0.00 65150.00 10788.23 73400.32 00:07:12.478 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x0 length 0xa0000 00:07:12.478 Nvme1n1 : 5.09 1934.77 7.56 0.00 0.00 65843.27 9880.81 66140.95 00:07:12.478 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0xa0000 length 0xa0000 00:07:12.478 Nvme1n1 : 5.08 1954.49 7.63 0.00 0.00 65075.01 9931.22 70980.53 00:07:12.478 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x0 length 0x80000 00:07:12.478 Nvme2n1 : 5.10 1934.12 7.56 0.00 0.00 65744.77 10485.76 66544.25 00:07:12.478 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x80000 length 0x80000 00:07:12.478 Nvme2n1 : 5.08 1953.63 7.63 0.00 0.00 64978.62 10939.47 70980.53 00:07:12.478 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x0 length 0x80000 00:07:12.478 Nvme2n2 : 5.10 1933.61 7.55 0.00 0.00 65621.64 10637.00 68157.44 00:07:12.478 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x80000 length 0x80000 00:07:12.478 Nvme2n2 : 5.09 1961.28 7.66 0.00 0.00 64771.77 9225.45 67754.14 00:07:12.478 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x0 length 0x80000 00:07:12.478 Nvme2n3 : 5.10 1933.10 7.55 0.00 0.00 65504.79 10889.06 69770.63 00:07:12.478 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x80000 length 0x80000 00:07:12.478 Nvme2n3 : 5.09 1960.71 7.66 0.00 0.00 64648.63 9527.93 70577.23 00:07:12.478 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x0 length 0x20000 00:07:12.478 Nvme3n1 : 5.10 1932.58 7.55 0.00 0.00 65401.18 10334.52 69770.63 00:07:12.478 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.478 Verification LBA range: start 0x20000 length 0x20000 00:07:12.478 Nvme3n1 : 5.09 1960.17 7.66 0.00 0.00 64540.02 9578.34 73803.62 00:07:12.478 [2024-11-20T15:55:10.728Z] =================================================================================================================== 00:07:12.478 [2024-11-20T15:55:10.728Z] Total : 23341.88 91.18 0.00 0.00 65271.31 9225.45 73803.62 00:07:13.909 00:07:13.909 real 0m7.498s 00:07:13.909 user 0m14.072s 00:07:13.909 sys 0m0.226s 00:07:13.909 15:55:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.909 15:55:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.909 ************************************ 00:07:13.909 END TEST bdev_verify 00:07:13.909 ************************************ 00:07:14.168 15:55:12 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:14.168 15:55:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:14.168 15:55:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.168 15:55:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.168 ************************************ 00:07:14.168 START TEST bdev_verify_big_io 00:07:14.168 ************************************ 00:07:14.168 15:55:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:14.168 [2024-11-20 15:55:12.239192] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
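Both verify passes drive the same bdevperf binary; only -o changes (4096 bytes in the run above, 65536 for this big-I/O run). An annotated copy of the invocation; the option glosses follow bdevperf's usage text as I read it, so treat the -C gloss in particular as an assumption, though it is consistent with every NvmeXnY appearing under both core masks in the results tables:

    # -q 128     queue depth per job
    # -o 65536   I/O size in bytes (4096 in the first verify pass)
    # -w verify  write, read back, and compare
    # -t 5       run time in seconds
    # -m 0x3     core mask: two reactors, matching the "Reactor started on core 0/1" lines
    # -C         assumed gloss: lets every core submit I/O to every bdev
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3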
00:07:14.168 [2024-11-20 15:55:12.239317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:07:14.168 [2024-11-20 15:55:12.397519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.425 [2024-11-20 15:55:12.507371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.425 [2024-11-20 15:55:12.507388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.989 Running I/O for 5 seconds... 00:07:19.485 327.00 IOPS, 20.44 MiB/s [2024-11-20T15:55:17.995Z] 1648.50 IOPS, 103.03 MiB/s [2024-11-20T15:55:19.384Z] 1440.00 IOPS, 90.00 MiB/s [2024-11-20T15:55:19.384Z] 1730.75 IOPS, 108.17 MiB/s 00:07:21.134 Latency(us) 00:07:21.134 [2024-11-20T15:55:19.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.134 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.134 Verification LBA range: start 0x0 length 0xbd0b 00:07:21.134 Nvme0n1 : 5.83 104.54 6.53 0.00 0.00 1151911.30 20568.22 1161499.57 00:07:21.134 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.134 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:21.134 Nvme0n1 : 5.73 111.71 6.98 0.00 0.00 1103857.59 15022.87 1180857.90 00:07:21.134 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.134 Verification LBA range: start 0x0 length 0xa000 00:07:21.134 Nvme1n1 : 5.83 109.81 6.86 0.00 0.00 1086671.01 102034.51 1025991.29 00:07:21.134 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.134 Verification LBA range: start 0xa000 length 0xa000 00:07:21.134 Nvme1n1 : 5.73 111.67 6.98 0.00 0.00 1066454.72 107277.39 974369.08 00:07:21.134 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.134 Verification LBA range: start 0x0 length 0x8000 00:07:21.134 Nvme2n1 : 5.83 109.75 6.86 0.00 0.00 1050230.15 139541.27 1064707.94 00:07:21.134 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.134 Verification LBA range: start 0x8000 length 0x8000 00:07:21.134 Nvme2n1 : 5.82 113.65 7.10 0.00 0.00 1010996.14 89532.26 890483.00 00:07:21.135 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.135 Verification LBA range: start 0x0 length 0x8000 00:07:21.135 Nvme2n2 : 5.96 118.13 7.38 0.00 0.00 952291.50 33473.77 1084066.26 00:07:21.135 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.135 Verification LBA range: start 0x8000 length 0x8000 00:07:21.135 Nvme2n2 : 5.93 116.38 7.27 0.00 0.00 956512.31 55251.89 1400252.26 00:07:21.135 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.135 Verification LBA range: start 0x0 length 0x8000 00:07:21.135 Nvme2n3 : 6.03 122.76 7.67 0.00 0.00 882455.73 34683.67 1122782.92 00:07:21.135 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.135 Verification LBA range: start 0x8000 length 0x8000 00:07:21.135 Nvme2n3 : 5.96 119.26 7.45 0.00 0.00 909097.13 30045.74 2013265.92 00:07:21.135 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.135 Verification LBA range: start 0x0 length 0x2000 00:07:21.135 Nvme3n1 : 6.04 137.82 8.61 0.00 0.00 763979.57 1052.36 1161499.57 
00:07:21.135 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.135 Verification LBA range: start 0x2000 length 0x2000 00:07:21.135 Nvme3n1 : 6.03 140.99 8.81 0.00 0.00 746070.08 693.17 2064888.12 00:07:21.135 [2024-11-20T15:55:19.385Z] =================================================================================================================== 00:07:21.135 [2024-11-20T15:55:19.385Z] Total : 1416.49 88.53 0.00 0.00 960396.34 693.17 2064888.12 00:07:23.038 00:07:23.038 real 0m9.009s 00:07:23.038 user 0m17.083s 00:07:23.038 sys 0m0.228s 00:07:23.038 15:55:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.038 ************************************ 00:07:23.038 END TEST bdev_verify_big_io 00:07:23.038 ************************************ 00:07:23.038 15:55:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:23.038 15:55:21 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.038 15:55:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.038 15:55:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.038 15:55:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.038 ************************************ 00:07:23.038 START TEST bdev_write_zeroes 00:07:23.038 ************************************ 00:07:23.038 15:55:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.299 [2024-11-20 15:55:21.321000] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:23.299 [2024-11-20 15:55:21.321124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60615 ] 00:07:23.299 [2024-11-20 15:55:21.480202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.561 [2024-11-20 15:55:21.582340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.130 Running I/O for 1 seconds... 
00:07:25.073 53927.00 IOPS, 210.65 MiB/s 00:07:25.073 Latency(us) 00:07:25.073 [2024-11-20T15:55:23.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.073 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.073 Nvme0n1 : 1.02 8842.15 34.54 0.00 0.00 14442.73 4637.93 39523.25 00:07:25.073 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.073 Nvme1n1 : 1.02 9054.13 35.37 0.00 0.00 14092.45 6704.84 22080.59 00:07:25.073 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.073 Nvme2n1 : 1.02 9007.68 35.19 0.00 0.00 14089.45 8116.38 21173.17 00:07:25.073 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.073 Nvme2n2 : 1.02 8997.41 35.15 0.00 0.00 14064.17 6125.10 20164.92 00:07:25.074 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.074 Nvme2n3 : 1.03 8987.19 35.11 0.00 0.00 14047.12 5091.64 20769.87 00:07:25.074 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.074 Nvme3n1 : 1.03 8914.74 34.82 0.00 0.00 14138.58 8620.50 22080.59 00:07:25.074 [2024-11-20T15:55:23.324Z] =================================================================================================================== 00:07:25.074 [2024-11-20T15:55:23.324Z] Total : 53803.29 210.17 0.00 0.00 14144.70 4637.93 39523.25 00:07:26.017 00:07:26.017 real 0m2.684s 00:07:26.017 user 0m2.389s 00:07:26.017 sys 0m0.177s 00:07:26.017 15:55:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.017 15:55:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:26.017 ************************************ 00:07:26.017 END TEST bdev_write_zeroes 00:07:26.017 ************************************ 00:07:26.017 15:55:23 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.017 15:55:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.017 15:55:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.017 15:55:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.017 ************************************ 00:07:26.017 START TEST bdev_json_nonenclosed 00:07:26.017 ************************************ 00:07:26.017 15:55:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.017 [2024-11-20 15:55:24.071113] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
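bdev_json_nonenclosed, starting here, is a negative test: bdevperf is handed a config whose top level is not wrapped in {} and must fail with a clean parse error rather than crash. The log does not reproduce nonenclosed.json itself, but judging from the error it triggers below, the shape of the problem is presumably:

    invalid (no enclosing object, hence "Invalid JSON configuration: not enclosed in {}."):

        "subsystems": []

    valid minimal skeleton by contrast:

        {
          "subsystems": []
        }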
00:07:26.017 [2024-11-20 15:55:24.071237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60670 ] 00:07:26.017 [2024-11-20 15:55:24.230388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.277 [2024-11-20 15:55:24.332572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.277 [2024-11-20 15:55:24.332660] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:26.277 [2024-11-20 15:55:24.332677] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.277 [2024-11-20 15:55:24.332686] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.277 00:07:26.277 real 0m0.512s 00:07:26.277 user 0m0.309s 00:07:26.277 sys 0m0.098s 00:07:26.277 15:55:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.277 ************************************ 00:07:26.277 END TEST bdev_json_nonenclosed 00:07:26.277 ************************************ 00:07:26.277 15:55:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:26.537 15:55:24 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.537 15:55:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.537 15:55:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.537 15:55:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.537 ************************************ 00:07:26.537 START TEST bdev_json_nonarray 00:07:26.537 ************************************ 00:07:26.537 15:55:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.537 [2024-11-20 15:55:24.646948] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:26.537 [2024-11-20 15:55:24.647063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:07:26.798 [2024-11-20 15:55:24.805943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.798 [2024-11-20 15:55:24.909011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.798 [2024-11-20 15:55:24.909097] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
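bdev_json_nonarray is the companion negative test: here the file is enclosed in {}, but 'subsystems' maps to something other than an array, tripping the second json_config validation seen just above. For contrast, a well-formed config keeps subsystems as an array of subsystem objects; a hypothetical minimal example, not the test's actual nonarray.json:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }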
00:07:26.798 [2024-11-20 15:55:24.909115] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.798 [2024-11-20 15:55:24.909124] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.059 00:07:27.059 real 0m0.510s 00:07:27.059 user 0m0.320s 00:07:27.059 sys 0m0.086s 00:07:27.059 15:55:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.059 15:55:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:27.059 ************************************ 00:07:27.059 END TEST bdev_json_nonarray 00:07:27.059 ************************************ 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:27.059 15:55:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:27.059 00:07:27.059 real 0m37.312s 00:07:27.059 user 0m58.213s 00:07:27.059 sys 0m5.044s 00:07:27.059 15:55:25 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.059 15:55:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.059 ************************************ 00:07:27.059 END TEST blockdev_nvme 00:07:27.059 ************************************ 00:07:27.059 15:55:25 -- spdk/autotest.sh@209 -- # uname -s 00:07:27.059 15:55:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:27.059 15:55:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:27.059 15:55:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.059 15:55:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.059 15:55:25 -- common/autotest_common.sh@10 -- # set +x 00:07:27.059 ************************************ 00:07:27.059 START TEST blockdev_nvme_gpt 00:07:27.059 ************************************ 00:07:27.059 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:27.059 * Looking for test storage... 
00:07:27.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:27.059 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.059 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.059 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.320 15:55:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.320 --rc genhtml_branch_coverage=1 00:07:27.320 --rc genhtml_function_coverage=1 00:07:27.320 --rc genhtml_legend=1 00:07:27.320 --rc geninfo_all_blocks=1 00:07:27.320 --rc geninfo_unexecuted_blocks=1 00:07:27.320 00:07:27.320 ' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.320 --rc 
genhtml_branch_coverage=1 00:07:27.320 --rc genhtml_function_coverage=1 00:07:27.320 --rc genhtml_legend=1 00:07:27.320 --rc geninfo_all_blocks=1 00:07:27.320 --rc geninfo_unexecuted_blocks=1 00:07:27.320 00:07:27.320 ' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.320 --rc genhtml_branch_coverage=1 00:07:27.320 --rc genhtml_function_coverage=1 00:07:27.320 --rc genhtml_legend=1 00:07:27.320 --rc geninfo_all_blocks=1 00:07:27.320 --rc geninfo_unexecuted_blocks=1 00:07:27.320 00:07:27.320 ' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.320 --rc genhtml_branch_coverage=1 00:07:27.320 --rc genhtml_function_coverage=1 00:07:27.320 --rc genhtml_legend=1 00:07:27.320 --rc geninfo_all_blocks=1 00:07:27.320 --rc geninfo_unexecuted_blocks=1 00:07:27.320 00:07:27.320 ' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60776 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:27.320 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60776 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60776 ']' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.320 15:55:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.320 15:55:25 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:27.320 [2024-11-20 15:55:25.453288] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:27.320 [2024-11-20 15:55:25.453397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60776 ] 00:07:27.581 [2024-11-20 15:55:25.610704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.581 [2024-11-20 15:55:25.709444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.170 15:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.170 15:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:28.170 15:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:28.170 15:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:28.170 15:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:28.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.691 Waiting for block devices as requested 00:07:28.691 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.691 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.951 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.951 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.235 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:34.235 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.235 
15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:34.235 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.236 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:34.236 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:34.236 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:34.236 15:55:32 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:34.236 BYT; 00:07:34.236 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:34.236 BYT; 00:07:34.236 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:34.236 15:55:32 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:34.236 15:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:35.181 The operation has completed successfully. 00:07:35.181 15:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:36.123 The operation has completed successfully. 00:07:36.123 15:55:34 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.268 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.268 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.268 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:37.268 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.268 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.268 [] 00:07:37.268 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.268 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:37.268 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:37.268 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:37.268 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:37.530 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:37.530 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.530 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:37.790 15:55:35 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.790 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:37.790 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:37.791 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "36b2852f-fc03-4160-b36c-c97ebcb337cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "36b2852f-fc03-4160-b36c-c97ebcb337cf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a4033ab7-ec38-4427-8350-80169f24ed29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4033ab7-ec38-4427-8350-80169f24ed29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5232f513-25c9-4e77-9e3e-31550f28d65b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5232f513-25c9-4e77-9e3e-31550f28d65b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "43d35acd-256c-47ba-895f-c55e6c387fed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43d35acd-256c-47ba-895f-c55e6c387fed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e137f7b5-3eff-4f8e-9a8b-aaf4d80d617b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e137f7b5-3eff-4f8e-9a8b-aaf4d80d617b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:37.791 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:37.791 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:37.791 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:37.791 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60776 00:07:37.791 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60776 ']' 00:07:37.791 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60776 00:07:37.791 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:37.791 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.791 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60776 00:07:37.791 15:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.791 killing process with pid 60776 00:07:37.791 15:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.791 15:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60776' 00:07:37.791 15:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60776 00:07:37.791 15:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60776 00:07:39.744 15:55:37 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:39.744 15:55:37 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:39.744 15:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:39.744 15:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.744 15:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.744 ************************************ 00:07:39.744 START TEST bdev_hello_world 00:07:39.744 ************************************ 00:07:39.744 15:55:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:39.744 
[2024-11-20 15:55:37.597105] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:39.744 [2024-11-20 15:55:37.597227] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61403 ] 00:07:39.744 [2024-11-20 15:55:37.754306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.744 [2024-11-20 15:55:37.855911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.314 [2024-11-20 15:55:38.402642] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:40.314 [2024-11-20 15:55:38.402691] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:40.314 [2024-11-20 15:55:38.402716] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:40.314 [2024-11-20 15:55:38.405446] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:40.314 [2024-11-20 15:55:38.406547] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:40.314 [2024-11-20 15:55:38.406574] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:40.314 [2024-11-20 15:55:38.407045] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:40.314 00:07:40.314 [2024-11-20 15:55:38.407067] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:40.883 00:07:40.883 real 0m1.596s 00:07:40.883 user 0m1.298s 00:07:40.883 sys 0m0.188s 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.144 ************************************ 00:07:41.144 END TEST bdev_hello_world 00:07:41.144 ************************************ 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.144 15:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.144 15:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.144 15:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.144 15:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.144 ************************************ 00:07:41.144 START TEST bdev_bounds 00:07:41.144 ************************************ 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:41.144 Process bdevio pid: 61440 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61440 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61440' 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61440 00:07:41.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61440 ']' 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.144 15:55:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.144 [2024-11-20 15:55:39.263822] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:41.144 [2024-11-20 15:55:39.263936] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61440 ] 00:07:41.409 [2024-11-20 15:55:39.428419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.409 [2024-11-20 15:55:39.534235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.409 [2024-11-20 15:55:39.534648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.409 [2024-11-20 15:55:39.534782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.984 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.984 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:41.984 15:55:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.243 I/O targets: 00:07:42.243 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:42.243 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:42.243 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:42.243 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.243 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.243 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.243 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:42.243 00:07:42.243 00:07:42.243 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.243 http://cunit.sourceforge.net/ 00:07:42.243 00:07:42.243 00:07:42.243 Suite: bdevio tests on: Nvme3n1 00:07:42.243 Test: blockdev write read block ...passed 00:07:42.243 Test: blockdev write zeroes read block ...passed 00:07:42.243 Test: blockdev write zeroes read no split ...passed 00:07:42.243 Test: blockdev write zeroes read split ...passed 00:07:42.243 Test: blockdev write zeroes read split partial ...passed 00:07:42.243 Test: blockdev reset ...[2024-11-20 15:55:40.283373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:42.243 passed 00:07:42.243 Test: blockdev write read 8 blocks ...[2024-11-20 15:55:40.286370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
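Each blockdev reset test above drives the full controller reset path: nvme_ctrlr_disconnect, teardown, then bdev_nvme_reset_ctrlr_complete. The same path can also be exercised by hand against a running SPDK target over rpc; a sketch, assuming the bdev_nvme_reset_controller rpc name is unchanged in this revision (it takes the controller name from the attach, not the namespace bdev):
  # Reset the controller attached as Nvme3 (the 0000:00:13.0 device reset above):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3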
00:07:42.243 passed 00:07:42.243 Test: blockdev write read size > 128k ...passed 00:07:42.243 Test: blockdev write read invalid size ...passed 00:07:42.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.243 Test: blockdev write read max offset ...passed 00:07:42.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.243 Test: blockdev writev readv 8 blocks ...passed 00:07:42.243 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.243 Test: blockdev writev readv block ...passed 00:07:42.243 Test: blockdev writev readv size > 128k ...passed 00:07:42.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.243 Test: blockdev comparev and writev ...[2024-11-20 15:55:40.306779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295004000 len:0x1000 00:07:42.243 [2024-11-20 15:55:40.306858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.243 passed 00:07:42.243 Test: blockdev nvme passthru rw ...passed 00:07:42.243 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:55:40.309649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.243 [2024-11-20 15:55:40.309740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.243 passed 00:07:42.243 Test: blockdev nvme admin passthru ...passed 00:07:42.243 Test: blockdev copy ...passed 00:07:42.243 Suite: bdevio tests on: Nvme2n3 00:07:42.243 Test: blockdev write read block ...passed 00:07:42.243 Test: blockdev write zeroes read block ...passed 00:07:42.243 Test: blockdev write zeroes read no split ...passed 00:07:42.243 Test: blockdev write zeroes read split ...passed 00:07:42.243 Test: blockdev write zeroes read split partial ...passed 00:07:42.243 Test: blockdev reset ...[2024-11-20 15:55:40.368755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.243 [2024-11-20 15:55:40.372586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:42.243 Test: blockdev write read 8 blocks ...
00:07:42.243 passed 00:07:42.243 Test: blockdev write read size > 128k ...passed 00:07:42.243 Test: blockdev write read invalid size ...passed 00:07:42.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.243 Test: blockdev write read max offset ...passed 00:07:42.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.243 Test: blockdev writev readv 8 blocks ...passed 00:07:42.243 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.243 Test: blockdev writev readv block ...passed 00:07:42.243 Test: blockdev writev readv size > 128k ...passed 00:07:42.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.243 Test: blockdev comparev and writev ...[2024-11-20 15:55:40.394277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295002000 len:0x1000 00:07:42.243 [2024-11-20 15:55:40.394335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.243 passed 00:07:42.243 Test: blockdev nvme passthru rw ...passed 00:07:42.243 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:55:40.396399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.243 [2024-11-20 15:55:40.396441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.243 passed 00:07:42.243 Test: blockdev nvme admin passthru ...passed 00:07:42.243 Test: blockdev copy ...passed 00:07:42.243 Suite: bdevio tests on: Nvme2n2 00:07:42.243 Test: blockdev write read block ...passed 00:07:42.243 Test: blockdev write zeroes read block ...passed 00:07:42.243 Test: blockdev write zeroes read no split ...passed 00:07:42.243 Test: blockdev write zeroes read split ...passed 00:07:42.243 Test: blockdev write zeroes read split partial ...passed 00:07:42.243 Test: blockdev reset ...[2024-11-20 15:55:40.453576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.243 [2024-11-20 15:55:40.458929] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:42.243 Test: blockdev write read 8 blocks ...
00:07:42.243 passed 00:07:42.243 Test: blockdev write read size > 128k ...passed 00:07:42.244 Test: blockdev write read invalid size ...passed 00:07:42.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.244 Test: blockdev write read max offset ...passed 00:07:42.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.244 Test: blockdev writev readv 8 blocks ...passed 00:07:42.244 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.244 Test: blockdev writev readv block ...passed 00:07:42.244 Test: blockdev writev readv size > 128k ...passed 00:07:42.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.244 Test: blockdev comparev and writev ...[2024-11-20 15:55:40.479942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2838000 len:0x1000 00:07:42.244 [2024-11-20 15:55:40.479997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.244 passed 00:07:42.244 Test: blockdev nvme passthru rw ...passed 00:07:42.244 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.244 Test: blockdev nvme admin passthru ...[2024-11-20 15:55:40.482394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.244 [2024-11-20 15:55:40.482429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.244 passed 00:07:42.244 Test: blockdev copy ...passed 00:07:42.244 Suite: bdevio tests on: Nvme2n1 00:07:42.244 Test: blockdev write read block ...passed 00:07:42.507 Test: blockdev write zeroes read block ...passed 00:07:42.507 Test: blockdev write zeroes read no split ...passed 00:07:42.507 Test: blockdev write zeroes read split ...passed 00:07:42.507 Test: blockdev write zeroes read split partial ...passed 00:07:42.507 Test: blockdev reset ...[2024-11-20 15:55:40.537835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.507 [2024-11-20 15:55:40.542498] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:42.507 Test: blockdev write read 8 blocks ...
00:07:42.507 passed 00:07:42.507 Test: blockdev write read size > 128k ...passed 00:07:42.507 Test: blockdev write read invalid size ...passed 00:07:42.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.507 Test: blockdev write read max offset ...passed 00:07:42.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.507 Test: blockdev writev readv 8 blocks ...passed 00:07:42.507 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.507 Test: blockdev writev readv block ...passed 00:07:42.507 Test: blockdev writev readv size > 128k ...passed 00:07:42.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.507 Test: blockdev comparev and writev ...[2024-11-20 15:55:40.562421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2834000 len:0x1000 00:07:42.507 [2024-11-20 15:55:40.562479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.507 passed 00:07:42.507 Test: blockdev nvme passthru rw ...passed 00:07:42.507 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.507 Test: blockdev nvme admin passthru ...[2024-11-20 15:55:40.565316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.507 [2024-11-20 15:55:40.565349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.507 passed 00:07:42.507 Test: blockdev copy ...passed 00:07:42.507 Suite: bdevio tests on: Nvme1n1p2 00:07:42.507 Test: blockdev write read block ...passed 00:07:42.507 Test: blockdev write zeroes read block ...passed 00:07:42.507 Test: blockdev write zeroes read no split ...passed 00:07:42.507 Test: blockdev write zeroes read split ...passed 00:07:42.507 Test: blockdev write zeroes read split partial ...passed 00:07:42.507 Test: blockdev reset ...[2024-11-20 15:55:40.623771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:42.507 [2024-11-20 15:55:40.627694] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:07:42.507 Test: blockdev write read 8 blocks ...
00:07:42.507 passed 00:07:42.507 Test: blockdev write read size > 128k ...passed 00:07:42.507 Test: blockdev write read invalid size ...passed 00:07:42.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.507 Test: blockdev write read max offset ...passed 00:07:42.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.507 Test: blockdev writev readv 8 blocks ...passed 00:07:42.507 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.507 Test: blockdev writev readv block ...passed 00:07:42.507 Test: blockdev writev readv size > 128k ...passed 00:07:42.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.507 Test: blockdev comparev and writev ...[2024-11-20 15:55:40.648232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d2830000 len:0x1000 00:07:42.507 [2024-11-20 15:55:40.648432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.507 passed 00:07:42.507 Test: blockdev nvme passthru rw ...passed 00:07:42.507 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.507 Test: blockdev nvme admin passthru ...passed 00:07:42.507 Test: blockdev copy ...passed 00:07:42.507 Suite: bdevio tests on: Nvme1n1p1 00:07:42.507 Test: blockdev write read block ...passed 00:07:42.507 Test: blockdev write zeroes read block ...passed 00:07:42.507 Test: blockdev write zeroes read no split ...passed 00:07:42.507 Test: blockdev write zeroes read split ...passed 00:07:42.507 Test: blockdev write zeroes read split partial ...passed 00:07:42.507 Test: blockdev reset ...[2024-11-20 15:55:40.703990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:42.507 [2024-11-20 15:55:40.708296] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
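Worth noting: the Nvme1n1p2 and Nvme1n1p1 suites above both reset the same PCI function, 0000:00:11.0. The two GPT partition bdevs are slices of Nvme1n1, so a reset on either one escalates to their shared Nvme1 controller. The partition-to-base mapping can be read back from a running app; a sketch, assuming jq is installed on the VM:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.driver_specific.gpt != null) | "\(.name) -> \(.driver_specific.gpt.base_bdev)"'
  # expected here: Nvme1n1p1 -> Nvme1n1 and Nvme1n1p2 -> Nvme1n1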
00:07:42.507 passed 00:07:42.507 Test: blockdev write read 8 blocks ...passed 00:07:42.507 Test: blockdev write read size > 128k ...passed 00:07:42.507 Test: blockdev write read invalid size ...passed 00:07:42.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.507 Test: blockdev write read max offset ...passed 00:07:42.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.507 Test: blockdev writev readv 8 blocks ...passed 00:07:42.507 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.507 Test: blockdev writev readv block ...passed 00:07:42.507 Test: blockdev writev readv size > 128k ...passed 00:07:42.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.507 Test: blockdev comparev and writev ...[2024-11-20 15:55:40.727366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x295a0e000 len:0x1000 00:07:42.507 [2024-11-20 15:55:40.727501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.507 passed 00:07:42.507 Test: blockdev nvme passthru rw ...passed 00:07:42.507 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.507 Test: blockdev nvme admin passthru ...passed 00:07:42.507 Test: blockdev copy ...passed 00:07:42.507 Suite: bdevio tests on: Nvme0n1 00:07:42.507 Test: blockdev write read block ...passed 00:07:42.507 Test: blockdev write zeroes read block ...passed 00:07:42.507 Test: blockdev write zeroes read no split ...passed 00:07:42.768 Test: blockdev write zeroes read split ...passed 00:07:42.768 Test: blockdev write zeroes read split partial ...passed 00:07:42.768 Test: blockdev reset ...[2024-11-20 15:55:40.782720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:42.768 [2024-11-20 15:55:40.787513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:42.768 passed 00:07:42.768 Test: blockdev write read 8 blocks ...passed 00:07:42.768 Test: blockdev write read size > 128k ...passed 00:07:42.768 Test: blockdev write read invalid size ...passed 00:07:42.768 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.768 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.768 Test: blockdev write read max offset ...passed 00:07:42.768 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.768 Test: blockdev writev readv 8 blocks ...passed 00:07:42.768 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.768 Test: blockdev writev readv block ...passed 00:07:42.768 Test: blockdev writev readv size > 128k ...passed 00:07:42.768 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.768 Test: blockdev comparev and writev ...passed 00:07:42.768 Test: blockdev nvme passthru rw ...[2024-11-20 15:55:40.802883] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:42.768 separate metadata which is not supported yet.
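The comparev_and_writev step is skipped on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which bdevio does not support yet; the other namespaces run the full compare path. Outside of SPDK, one would see the metadata size in the namespace's LBA formats with nvme-cli; a hedged example, since the device is not visible to the kernel as /dev/nvme0n1 while SPDK's userspace driver owns it:

```bash
# Hypothetical check with nvme-cli once the kernel driver owns the device:
# the "ms" field of each LBA format is the per-block metadata size; ms > 0
# without extended LBAs means separate metadata, which triggers the skip above.
nvme id-ns /dev/nvme0n1 | grep '^lbaf'
```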
00:07:42.768 passed 00:07:42.768 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.768 Test: blockdev nvme admin passthru ...[2024-11-20 15:55:40.804103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:42.768 [2024-11-20 15:55:40.804150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:42.768 passed 00:07:42.768 Test: blockdev copy ...passed 00:07:42.768 00:07:42.768 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.768 suites 7 7 n/a 0 0 00:07:42.768 tests 161 161 161 0 0 00:07:42.768 asserts 1025 1025 1025 0 n/a 00:07:42.768 00:07:42.768 Elapsed time = 1.457 seconds 00:07:42.768 0 00:07:42.768 15:55:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61440 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61440 ']' 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61440 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61440 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61440' 00:07:42.769 killing process with pid 61440 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61440 00:07:42.769 15:55:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61440 00:07:43.342 15:55:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:43.342 00:07:43.342 real 0m2.334s 00:07:43.342 user 0m5.890s 00:07:43.342 sys 0m0.298s 00:07:43.342 ************************************ 00:07:43.342 END TEST bdev_bounds 00:07:43.342 ************************************ 00:07:43.342 15:55:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.342 15:55:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:43.342 15:55:41 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:43.342 15:55:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.342 15:55:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.342 15:55:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.603 ************************************ 00:07:43.603 START TEST bdev_nbd 00:07:43.603 ************************************ 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:43.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61500 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61500 /var/tmp/spdk-nbd.sock 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61500 ']' 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:43.603 15:55:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:43.603 [2024-11-20 15:55:41.672680] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
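At this point bdev_svc has been launched against the private RPC socket and waitforlisten polls until the application answers. A condensed sketch of that launch-and-wait step, using the paths from the trace above; the polling RPC and retry count are illustrative choices, not the harness's exact ones:

```bash
sock=/var/tmp/spdk-nbd.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
for _ in $(seq 1 100); do
  # spdk_get_version is a cheap RPC; it succeeds once the app is listening.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version \
      >/dev/null 2>&1 && break
  sleep 0.1
done
```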
00:07:43.603 [2024-11-20 15:55:41.673299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.603 [2024-11-20 15:55:41.838146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.865 [2024-11-20 15:55:41.971602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.437 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.699 1+0 records in 00:07:44.699 1+0 records out 00:07:44.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140452 s, 2.9 MB/s 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.699 15:55:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.958 1+0 records in 00:07:44.958 1+0 records out 00:07:44.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761314 s, 5.4 MB/s 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.958 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.219 1+0 records in 00:07:45.219 1+0 records out 00:07:45.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795487 s, 5.1 MB/s 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.219 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.479 1+0 records in 00:07:45.479 1+0 records out 00:07:45.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950789 s, 4.3 MB/s 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.479 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.740 1+0 records in 00:07:45.740 1+0 records out 00:07:45.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131253 s, 3.1 MB/s 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.740 15:55:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
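Each nbd_start_disk above is followed by the waitfornbd helper whose trace repeats here: it polls /proc/partitions until the kernel registers the device, then proves the device answers I/O with a single O_DIRECT read. Condensed into one function, with an illustrative retry cadence:

```bash
waitfornbd() {  # condensed from the common/autotest_common.sh trace above
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break  # device registered?
    sleep 0.1   # the real helper's retry timing may differ
  done
  # one direct read of the first 4 KiB block proves the device is readable
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
    rm -f /tmp/nbdtest
}
waitfornbd nbd0
```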
00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.000 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.000 1+0 records in 00:07:46.000 1+0 records out 00:07:46.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100784 s, 4.1 MB/s 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.001 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.260 1+0 records in 00:07:46.260 1+0 records out 00:07:46.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106696 s, 3.8 MB/s 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd0", 00:07:46.260 "bdev_name": "Nvme0n1" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd1", 00:07:46.260 "bdev_name": "Nvme1n1p1" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd2", 00:07:46.260 "bdev_name": "Nvme1n1p2" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd3", 00:07:46.260 "bdev_name": "Nvme2n1" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd4", 00:07:46.260 "bdev_name": "Nvme2n2" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd5", 00:07:46.260 "bdev_name": "Nvme2n3" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd6", 00:07:46.260 "bdev_name": "Nvme3n1" 00:07:46.260 } 00:07:46.260 ]' 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd0", 00:07:46.260 "bdev_name": "Nvme0n1" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd1", 00:07:46.260 "bdev_name": "Nvme1n1p1" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd2", 00:07:46.260 "bdev_name": "Nvme1n1p2" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd3", 00:07:46.260 "bdev_name": "Nvme2n1" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd4", 00:07:46.260 "bdev_name": "Nvme2n2" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd5", 00:07:46.260 "bdev_name": "Nvme2n3" 00:07:46.260 }, 00:07:46.260 { 00:07:46.260 "nbd_device": "/dev/nbd6", 00:07:46.260 "bdev_name": "Nvme3n1" 00:07:46.260 } 00:07:46.260 ]' 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:46.260 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.261 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.520 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.779 15:55:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.039 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.300 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.561 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:47.821 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.821 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.821 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.821 15:55:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:47.821 15:55:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.821 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.082 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:48.342 /dev/nbd0 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.342 1+0 records in 00:07:48.342 1+0 records out 00:07:48.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110285 s, 3.7 MB/s 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.342 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:48.603 /dev/nbd1 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.603 1+0 records in 00:07:48.603 1+0 records out 00:07:48.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130095 s, 3.1 MB/s 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.603 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:48.865 /dev/nbd10 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.865 1+0 records in 00:07:48.865 1+0 records out 00:07:48.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104548 s, 3.9 MB/s 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.865 15:55:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.865 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:48.865 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.865 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.865 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.865 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:49.127 /dev/nbd11 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.127 1+0 records in 00:07:49.127 1+0 records out 00:07:49.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122761 s, 3.3 MB/s 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.127 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.128 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.128 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.128 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.128 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:49.389 /dev/nbd12 00:07:49.389 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:49.389 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:49.389 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:49.389 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.389 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.389 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:49.390 15:55:47 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.390 1+0 records in 00:07:49.390 1+0 records out 00:07:49.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103015 s, 4.0 MB/s 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.390 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:49.654 /dev/nbd13 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.654 1+0 records in 00:07:49.654 1+0 records out 00:07:49.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123097 s, 3.3 MB/s 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
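Once the devices are up, nbd_get_disks returns the JSON array of nbd_device/bdev_name pairs shown just below, and the helpers reduce it with jq. An illustrative one-liner, not from the harness, that resolves which nbd device backs a given bdev:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks |
  jq -r '.[] | select(.bdev_name == "Nvme1n1p2") | .nbd_device'
# prints /dev/nbd10 with the mapping established above
```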
00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.654 15:55:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:49.916 /dev/nbd14 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.916 1+0 records in 00:07:49.916 1+0 records out 00:07:49.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128479 s, 3.2 MB/s 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.916 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd0", 00:07:50.178 "bdev_name": "Nvme0n1" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd1", 00:07:50.178 "bdev_name": "Nvme1n1p1" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd10", 00:07:50.178 "bdev_name": "Nvme1n1p2" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd11", 00:07:50.178 "bdev_name": "Nvme2n1" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd12", 00:07:50.178 "bdev_name": "Nvme2n2" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd13", 00:07:50.178 "bdev_name": "Nvme2n3" 00:07:50.178 }, 00:07:50.178 { 
00:07:50.178 "nbd_device": "/dev/nbd14", 00:07:50.178 "bdev_name": "Nvme3n1" 00:07:50.178 } 00:07:50.178 ]' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd0", 00:07:50.178 "bdev_name": "Nvme0n1" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd1", 00:07:50.178 "bdev_name": "Nvme1n1p1" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd10", 00:07:50.178 "bdev_name": "Nvme1n1p2" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd11", 00:07:50.178 "bdev_name": "Nvme2n1" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd12", 00:07:50.178 "bdev_name": "Nvme2n2" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd13", 00:07:50.178 "bdev_name": "Nvme2n3" 00:07:50.178 }, 00:07:50.178 { 00:07:50.178 "nbd_device": "/dev/nbd14", 00:07:50.178 "bdev_name": "Nvme3n1" 00:07:50.178 } 00:07:50.178 ]' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:50.178 /dev/nbd1 00:07:50.178 /dev/nbd10 00:07:50.178 /dev/nbd11 00:07:50.178 /dev/nbd12 00:07:50.178 /dev/nbd13 00:07:50.178 /dev/nbd14' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:50.178 /dev/nbd1 00:07:50.178 /dev/nbd10 00:07:50.178 /dev/nbd11 00:07:50.178 /dev/nbd12 00:07:50.178 /dev/nbd13 00:07:50.178 /dev/nbd14' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:50.178 256+0 records in 00:07:50.178 256+0 records out 00:07:50.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00749015 s, 140 MB/s 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.178 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:50.439 256+0 records in 00:07:50.439 256+0 records out 00:07:50.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.264052 s, 4.0 MB/s 00:07:50.439 
15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.439 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:50.700 256+0 records in 00:07:50.700 256+0 records out 00:07:50.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.268571 s, 3.9 MB/s 00:07:50.700 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.700 15:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:50.961 256+0 records in 00:07:50.961 256+0 records out 00:07:50.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227073 s, 4.6 MB/s 00:07:50.961 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.961 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:51.221 256+0 records in 00:07:51.221 256+0 records out 00:07:51.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.268996 s, 3.9 MB/s 00:07:51.221 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.221 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:51.481 256+0 records in 00:07:51.481 256+0 records out 00:07:51.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.264472 s, 4.0 MB/s 00:07:51.481 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.481 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:51.742 256+0 records in 00:07:51.742 256+0 records out 00:07:51.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222007 s, 4.7 MB/s 00:07:51.742 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.742 15:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:52.002 256+0 records in 00:07:52.002 256+0 records out 00:07:52.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251613 s, 4.2 MB/s 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.002 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.263 
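
Annotation: the write/verify round trip that just completed is nbd_dd_data_verify, called once with `write` and once with `verify`. Condensed from the trace; the temp-file path is shortened here (the real run uses test/bdev/nbdrandtest) and the argument parsing is simplified.

    # Sketch of nbd_dd_data_verify (nbd_common.sh @70-@85), condensed from
    # the trace; list/argument handling is simplified.
    nbd_dd_data_verify() {
        local nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
        local operation=$1
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            # Seed 1 MiB of random data, then push it through every device
            # with O_DIRECT so the I/O really reaches the bdev layer.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Byte-compare the first 1 MiB of each device with the seed file,
            # then drop the seed.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

The ~4 MB/s per-device write figures, against 140 MB/s for the urandom seed, likely reflect the synchronous 4 KiB direct-I/O round trips through the NBD kernel path rather than the drives themselves.
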
15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.263 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.524 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.784 15:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.044 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:53.304 15:55:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.304 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.565 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.826 15:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:54.088 
15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:54.088 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:54.350 malloc_lvol_verify 00:07:54.350 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:54.350 b4e94364-0815-4dd5-8b1b-3d4770cc203e 00:07:54.350 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:54.611 ef45cef1-1ae2-4b0b-90a6-35836c83c4f6 00:07:54.611 15:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:54.871 /dev/nbd0 00:07:54.871 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:54.871 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:54.871 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:54.871 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:54.872 mke2fs 1.47.0 (5-Feb-2023) 00:07:54.872 Discarding device blocks: 0/4096 done 00:07:54.872 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:54.872 00:07:54.872 Allocating group tables: 0/1 done 00:07:54.872 Writing inode tables: 0/1 done 00:07:54.872 Creating journal (1024 blocks): done 00:07:54.872 Writing superblocks and filesystem accounting information: 0/1 done 00:07:54.872 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:54.872 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.872 15:55:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.132 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61500 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61500 ']' 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61500 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61500 00:07:55.133 killing process with pid 61500 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61500' 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61500 00:07:55.133 15:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61500 00:07:56.074 ************************************ 00:07:56.074 END TEST bdev_nbd 00:07:56.074 ************************************ 00:07:56.074 15:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:56.074 00:07:56.074 real 0m12.466s 00:07:56.074 user 0m17.002s 00:07:56.074 sys 0m4.008s 00:07:56.074 15:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.074 15:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:56.074 15:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:56.074 15:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:56.074 skipping fio tests on NVMe due to multi-ns failures. 00:07:56.074 15:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:56.074 15:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
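
Annotation: every test block in this file ends its target the same way; the killprocess helper traced just above (pid 61500, the spdk-nbd reactor) amounts to the sketch below. Signal choice and the sudo-branch body are assumptions; the liveness check, the comm lookup, and the kill/wait pair mirror the trace.

    # Sketch of killprocess (autotest_common.sh @954-@978), reconstructed
    # from the xtrace; failure handling is assumed.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # @954: no pid given
        kill -0 "$pid" || return 1           # @958: bail if already gone
        local process_name
        if [ "$(uname)" = Linux ]; then      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: "reactor_0" here
        fi
        if [ "$process_name" = sudo ]; then  # @964: unwrap sudo-launched targets
            :  # assumed: target the child rather than the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"                          # @973
        wait "$pid"                          # @978: reap it so sockets free up
    }

The same helper reappears at the end of the gpt_uuid test below for pid 62240.
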
00:07:56.074 15:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:56.074 15:55:54 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:56.074 15:55:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:56.074 15:55:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.074 15:55:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.074 ************************************ 00:07:56.074 START TEST bdev_verify 00:07:56.074 ************************************ 00:07:56.074 15:55:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:56.074 [2024-11-20 15:55:54.218120] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:56.074 [2024-11-20 15:55:54.218243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61928 ] 00:07:56.335 [2024-11-20 15:55:54.379423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:56.335 [2024-11-20 15:55:54.483358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.335 [2024-11-20 15:55:54.483496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.908 Running I/O for 5 seconds... 
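
Annotation: from here on the file drives the same harness repeatedly: the bdevperf example binary pointed at the bdev.json that defines the seven bdevs. Reading the invocation just launched (flag meanings per bdevperf's usage text; -C is deliberately left unannotated rather than guessed at):

    # Anatomy of the bdev_verify run above:
    #   --json bdev.json   bdev definitions shared by all runs in this file
    #   -q 128             queue depth per job
    #   -o 4096            I/O size in bytes (4 KiB)
    #   -w verify          write-then-read-back workload with data comparison
    #   -t 5               run time in seconds
    #   -m 0x3             core mask, hence the two reactors in the trace
    # bdev_verify_big_io below reuses the command with -o 65536 (64 KiB),
    # and bdev_write_zeroes swaps in -w write_zeroes -t 1 on a single core.
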
00:07:59.239 21120.00 IOPS, 82.50 MiB/s [2024-11-20T15:55:58.430Z] 19840.00 IOPS, 77.50 MiB/s [2024-11-20T15:55:59.387Z] 19413.33 IOPS, 75.83 MiB/s [2024-11-20T15:56:00.357Z] 19360.00 IOPS, 75.62 MiB/s [2024-11-20T15:56:00.357Z] 19225.60 IOPS, 75.10 MiB/s 00:08:02.107 Latency(us) 00:08:02.107 [2024-11-20T15:56:00.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0xbd0bd 00:08:02.107 Nvme0n1 : 5.07 1337.91 5.23 0.00 0.00 95326.53 21677.29 88322.36 00:08:02.107 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:02.107 Nvme0n1 : 5.07 1364.05 5.33 0.00 0.00 93493.52 21979.77 84692.68 00:08:02.107 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0x4ff80 00:08:02.107 Nvme1n1p1 : 5.07 1337.10 5.22 0.00 0.00 95058.48 23088.84 84692.68 00:08:02.107 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:02.107 Nvme1n1p1 : 5.07 1363.64 5.33 0.00 0.00 93419.48 24500.38 82272.89 00:08:02.107 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0x4ff7f 00:08:02.107 Nvme1n1p2 : 5.08 1336.40 5.22 0.00 0.00 94848.16 24702.03 83079.48 00:08:02.107 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:02.107 Nvme1n1p2 : 5.07 1363.24 5.33 0.00 0.00 93276.01 26012.75 80659.69 00:08:02.107 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0x80000 00:08:02.107 Nvme2n1 : 5.08 1335.81 5.22 0.00 0.00 94674.95 25710.28 82272.89 00:08:02.107 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x80000 length 0x80000 00:08:02.107 Nvme2n1 : 5.07 1362.45 5.32 0.00 0.00 93187.17 25609.45 78643.20 00:08:02.107 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0x80000 00:08:02.107 Nvme2n2 : 5.09 1345.31 5.26 0.00 0.00 93954.53 5898.24 84289.38 00:08:02.107 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x80000 length 0x80000 00:08:02.107 Nvme2n2 : 5.08 1361.77 5.32 0.00 0.00 93071.67 23592.96 81869.59 00:08:02.107 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0x80000 00:08:02.107 Nvme2n3 : 5.09 1344.96 5.25 0.00 0.00 93877.53 5545.35 84692.68 00:08:02.107 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x80000 length 0x80000 00:08:02.107 Nvme2n3 : 5.09 1371.37 5.36 0.00 0.00 92379.62 4990.82 84289.38 00:08:02.107 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x0 length 0x20000 00:08:02.107 Nvme3n1 : 5.09 1344.61 5.25 0.00 0.00 93806.94 5318.50 85499.27 00:08:02.107 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:02.107 Verification LBA range: start 0x20000 length 0x20000 00:08:02.107 Nvme3n1 : 
5.09 1370.82 5.35 0.00 0.00 92280.92 6150.30 83482.78 00:08:02.107 [2024-11-20T15:56:00.357Z] =================================================================================================================== 00:08:02.107 [2024-11-20T15:56:00.357Z] Total : 18939.45 73.98 0.00 0.00 93752.49 4990.82 88322.36 00:08:03.490 00:08:03.490 real 0m7.218s 00:08:03.490 user 0m13.452s 00:08:03.490 sys 0m0.239s 00:08:03.490 15:56:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.490 15:56:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:03.490 ************************************ 00:08:03.490 END TEST bdev_verify 00:08:03.490 ************************************ 00:08:03.490 15:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:03.490 15:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:03.490 15:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.490 15:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.490 ************************************ 00:08:03.490 START TEST bdev_verify_big_io 00:08:03.490 ************************************ 00:08:03.490 15:56:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:03.490 [2024-11-20 15:56:01.497223] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:03.491 [2024-11-20 15:56:01.497350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62021 ] 00:08:03.491 [2024-11-20 15:56:01.657319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.751 [2024-11-20 15:56:01.765361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.751 [2024-11-20 15:56:01.765464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.322 Running I/O for 5 seconds... 
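
Annotation: a note on reading the result tables this log keeps producing. Each bdev gets two Job rows, one per core in the 0x3 mask; the columns are runtime in seconds, IOPS, MiB/s, failures and timeouts per second, then average/min/max latency in microseconds; the Total row aggregates every job. A hypothetical extraction helper for a capture of this log with one entry per line (the file name is a placeholder; field positions assume the bracketed timestamp is the first whitespace-separated field):

    # Pull the aggregate line out of a bdevperf capture shaped like
    # '<[timestamp]> Total : <IOPS> <MiB/s> <Fail/s> <TO/s> <avg> <min> <max>'
    awk '$2 == "Total" && $3 == ":" {
        printf "IOPS=%s MiB/s=%s avg_lat_us=%s\n", $4, $5, $8
    }' bdevperf_run.log
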
00:08:08.263 0.00 IOPS, 0.00 MiB/s [2024-11-20T15:56:09.060Z] 1089.00 IOPS, 68.06 MiB/s [2024-11-20T15:56:09.060Z] 2241.33 IOPS, 140.08 MiB/s 00:08:10.810 Latency(us) 00:08:10.810 [2024-11-20T15:56:09.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.810 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0xbd0b 00:08:10.810 Nvme0n1 : 5.93 97.17 6.07 0.00 0.00 1242791.95 17946.78 1322818.95 00:08:10.810 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:10.810 Nvme0n1 : 5.73 100.50 6.28 0.00 0.00 1218622.84 28835.84 1322818.95 00:08:10.810 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0x4ff8 00:08:10.810 Nvme1n1p1 : 6.03 101.77 6.36 0.00 0.00 1162628.02 94371.84 1122782.92 00:08:10.810 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:10.810 Nvme1n1p1 : 5.85 103.62 6.48 0.00 0.00 1153494.83 113730.17 1122782.92 00:08:10.810 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0x4ff7 00:08:10.810 Nvme1n1p2 : 6.03 101.59 6.35 0.00 0.00 1124116.83 94371.84 967916.31 00:08:10.810 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:10.810 Nvme1n1p2 : 5.85 103.75 6.48 0.00 0.00 1114204.69 114536.76 1032444.06 00:08:10.810 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0x8000 00:08:10.810 Nvme2n1 : 6.04 106.01 6.63 0.00 0.00 1057722.68 102034.51 1000180.18 00:08:10.810 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x8000 length 0x8000 00:08:10.810 Nvme2n1 : 5.96 107.38 6.71 0.00 0.00 1045769.29 106470.79 1058255.16 00:08:10.810 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0x8000 00:08:10.810 Nvme2n2 : 6.14 109.03 6.81 0.00 0.00 991626.22 95581.74 1019538.51 00:08:10.810 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x8000 length 0x8000 00:08:10.810 Nvme2n2 : 6.09 115.50 7.22 0.00 0.00 947702.19 44766.13 1109877.37 00:08:10.810 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0x8000 00:08:10.810 Nvme2n3 : 6.21 118.03 7.38 0.00 0.00 894662.58 12653.49 1284102.30 00:08:10.810 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x8000 length 0x8000 00:08:10.810 Nvme2n3 : 6.14 119.53 7.47 0.00 0.00 885090.12 49404.06 1135688.47 00:08:10.810 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x0 length 0x2000 00:08:10.810 Nvme3n1 : 6.22 120.90 7.56 0.00 0.00 845238.33 7057.72 2051982.57 00:08:10.810 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.810 Verification LBA range: start 0x2000 length 0x2000 00:08:10.810 Nvme3n1 : 6.20 131.90 8.24 0.00 0.00 779999.84 3377.62 1632552.17 00:08:10.810 
[2024-11-20T15:56:09.060Z] =================================================================================================================== 00:08:10.810 [2024-11-20T15:56:09.060Z] Total : 1536.69 96.04 0.00 0.00 1018319.43 3377.62 2051982.57 00:08:12.191 00:08:12.191 real 0m8.833s 00:08:12.191 user 0m16.697s 00:08:12.191 sys 0m0.248s 00:08:12.191 ************************************ 00:08:12.191 END TEST bdev_verify_big_io 00:08:12.191 ************************************ 00:08:12.191 15:56:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.191 15:56:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:12.191 15:56:10 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.191 15:56:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:12.191 15:56:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.191 15:56:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.191 ************************************ 00:08:12.191 START TEST bdev_write_zeroes 00:08:12.191 ************************************ 00:08:12.191 15:56:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.191 [2024-11-20 15:56:10.393966] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:12.191 [2024-11-20 15:56:10.394271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62136 ] 00:08:12.452 [2024-11-20 15:56:10.555171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.452 [2024-11-20 15:56:10.662031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.022 Running I/O for 1 seconds... 
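
Annotation: the write_zeroes pass now starting exercises the dedicated zero-fill opcode rather than ordinary data writes; the table just below shows it sustaining ~53k IOPS on one core, well above the ~19k of the two-core 4 KiB verify pass. The bdev dumps later in this log show the GPT partition bdevs advertising "write_zeroes": true; a hypothetical spot-check in the style of the RPCs used elsewhere in this file:

    # Hypothetical pre-flight check (rpc.py path as used earlier in this log):
    # confirm a bdev advertises the zero-fill opcode before perf-testing it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
        | jq -r '.[0].supported_io_types.write_zeroes'
    # expected output: true
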
00:08:14.472 53696.00 IOPS, 209.75 MiB/s 00:08:14.472 Latency(us) 00:08:14.472 [2024-11-20T15:56:12.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.472 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme0n1 : 1.03 7675.25 29.98 0.00 0.00 16609.82 7914.73 33877.07 00:08:14.472 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme1n1p1 : 1.03 7665.84 29.94 0.00 0.00 16607.24 12351.02 34885.32 00:08:14.472 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme1n1p2 : 1.03 7705.17 30.10 0.00 0.00 16432.54 8670.92 30045.74 00:08:14.472 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme2n1 : 1.03 7696.50 30.06 0.00 0.00 16388.86 9023.80 27424.30 00:08:14.472 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme2n2 : 1.03 7687.90 30.03 0.00 0.00 16371.84 9527.93 25710.28 00:08:14.472 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme2n3 : 1.03 7679.09 30.00 0.00 0.00 16340.59 9275.86 26617.70 00:08:14.472 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:14.472 Nvme3n1 : 1.03 7608.65 29.72 0.00 0.00 16456.65 9729.58 30650.68 00:08:14.472 [2024-11-20T15:56:12.722Z] =================================================================================================================== 00:08:14.472 [2024-11-20T15:56:12.722Z] Total : 53718.39 209.84 0.00 0.00 16457.88 7914.73 34885.32 00:08:15.046 00:08:15.046 real 0m2.724s 00:08:15.046 user 0m2.414s 00:08:15.046 sys 0m0.190s 00:08:15.046 15:56:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.046 ************************************ 00:08:15.046 END TEST bdev_write_zeroes 00:08:15.046 ************************************ 00:08:15.046 15:56:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:15.047 15:56:13 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:15.047 15:56:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:15.047 15:56:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.047 15:56:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:15.047 ************************************ 00:08:15.047 START TEST bdev_json_nonenclosed 00:08:15.047 ************************************ 00:08:15.047 15:56:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:15.047 [2024-11-20 15:56:13.184929] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:08:15.047 [2024-11-20 15:56:13.185050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:08:15.307 [2024-11-20 15:56:13.344656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.307 [2024-11-20 15:56:13.446745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.307 [2024-11-20 15:56:13.446821] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:15.307 [2024-11-20 15:56:13.446838] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:15.307 [2024-11-20 15:56:13.446847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.568 00:08:15.568 real 0m0.512s 00:08:15.568 user 0m0.318s 00:08:15.568 sys 0m0.090s 00:08:15.568 ************************************ 00:08:15.568 END TEST bdev_json_nonenclosed 00:08:15.568 ************************************ 00:08:15.568 15:56:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.568 15:56:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 15:56:13 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:15.568 15:56:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:15.568 15:56:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.568 15:56:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 ************************************ 00:08:15.568 START TEST bdev_json_nonarray 00:08:15.568 ************************************ 00:08:15.568 15:56:13 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:15.568 [2024-11-20 15:56:13.766685] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:15.568 [2024-11-20 15:56:13.766817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62209 ] 00:08:15.828 [2024-11-20 15:56:13.920936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.828 [2024-11-20 15:56:14.023049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.828 [2024-11-20 15:56:14.023303] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
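
Annotation: the failure just traced and the "not enclosed in {}" one before it are deliberate: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf malformed configs and require a clean error path (spdk_app_stop with a non-zero code) instead of a crash. The fixture files are not shown in the log; shapes consistent with the two error messages would be (hypothetical contents, not the repo files verbatim):

    valid shape:      { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    nonenclosed.json: "subsystems": [ ... ]        <- outer {} missing
    nonarray.json:    { "subsystems": { ... } }    <- object where the array belongs
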
00:08:15.828 [2024-11-20 15:56:14.023326] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:15.828 [2024-11-20 15:56:14.023337] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.090 00:08:16.090 real 0m0.505s 00:08:16.090 user 0m0.306s 00:08:16.090 sys 0m0.093s 00:08:16.090 ************************************ 00:08:16.090 END TEST bdev_json_nonarray 00:08:16.090 ************************************ 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:16.090 15:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:16.090 15:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:16.090 15:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:16.090 15:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.090 15:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.090 15:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.090 ************************************ 00:08:16.090 START TEST bdev_gpt_uuid 00:08:16.090 ************************************ 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62240 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62240 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62240 ']' 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.090 15:56:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:16.351 [2024-11-20 15:56:14.348439] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
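
Annotation: the gpt_uuid test starting here runs against a bare spdk_tgt rather than bdevperf: it loads the same bdev.json, waits for examine to discover the GPT partitions, then asserts each partition bdev is retrievable by its partition GUID. Condensed from the rpc_cmd/jq calls traced below (assuming rpc_cmd forwards to scripts/rpc.py against the target's /var/tmp/spdk.sock):

    # Condensed from the traced checks (blockdev.sh @658-@666); the rpc_cmd
    # plumbing is assumed.
    check_gpt_partition() {
        local uuid=$1   # e.g. 6f89f330-603b-4116-ac73-2ca8eae53030
        local bdev
        bdev=$(rpc_cmd bdev_get_bdevs -b "$uuid")
        # Exactly one bdev answers to the GUID...
        [[ $(jq -r length <<< "$bdev") == 1 ]]
        # ...it aliases the GUID...
        [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
        # ...and the GPT metadata carries it as the unique partition GUID.
        [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]
    }

The run below does this once per partition (SPDK_TEST_first at offset_blocks 256, SPDK_TEST_second at 655360), then tears the target down with the killprocess helper sketched earlier.
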
00:08:16.351 [2024-11-20 15:56:14.348739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62240 ] 00:08:16.351 [2024-11-20 15:56:14.506840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.613 [2024-11-20 15:56:14.609758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.183 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.183 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:17.183 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:17.183 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.183 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 Some configs were skipped because the RPC state that can call them passed over. 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.444 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:17.445 { 00:08:17.445 "name": "Nvme1n1p1", 00:08:17.445 "aliases": [ 00:08:17.445 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:17.445 ], 00:08:17.445 "product_name": "GPT Disk", 00:08:17.445 "block_size": 4096, 00:08:17.445 "num_blocks": 655104, 00:08:17.445 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:17.445 "assigned_rate_limits": { 00:08:17.445 "rw_ios_per_sec": 0, 00:08:17.445 "rw_mbytes_per_sec": 0, 00:08:17.445 "r_mbytes_per_sec": 0, 00:08:17.445 "w_mbytes_per_sec": 0 00:08:17.445 }, 00:08:17.445 "claimed": false, 00:08:17.445 "zoned": false, 00:08:17.445 "supported_io_types": { 00:08:17.445 "read": true, 00:08:17.445 "write": true, 00:08:17.445 "unmap": true, 00:08:17.445 "flush": true, 00:08:17.445 "reset": true, 00:08:17.445 "nvme_admin": false, 00:08:17.445 "nvme_io": false, 00:08:17.445 "nvme_io_md": false, 00:08:17.445 "write_zeroes": true, 00:08:17.445 "zcopy": false, 00:08:17.445 "get_zone_info": false, 00:08:17.445 "zone_management": false, 00:08:17.445 "zone_append": false, 00:08:17.445 "compare": true, 00:08:17.445 "compare_and_write": false, 00:08:17.445 "abort": true, 00:08:17.445 "seek_hole": false, 00:08:17.445 "seek_data": false, 00:08:17.445 "copy": true, 00:08:17.445 "nvme_iov_md": false 00:08:17.445 }, 00:08:17.445 "driver_specific": { 
00:08:17.445 "gpt": { 00:08:17.445 "base_bdev": "Nvme1n1", 00:08:17.445 "offset_blocks": 256, 00:08:17.445 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:17.445 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:17.445 "partition_name": "SPDK_TEST_first" 00:08:17.445 } 00:08:17.445 } 00:08:17.445 } 00:08:17.445 ]' 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:17.445 { 00:08:17.445 "name": "Nvme1n1p2", 00:08:17.445 "aliases": [ 00:08:17.445 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:17.445 ], 00:08:17.445 "product_name": "GPT Disk", 00:08:17.445 "block_size": 4096, 00:08:17.445 "num_blocks": 655103, 00:08:17.445 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:17.445 "assigned_rate_limits": { 00:08:17.445 "rw_ios_per_sec": 0, 00:08:17.445 "rw_mbytes_per_sec": 0, 00:08:17.445 "r_mbytes_per_sec": 0, 00:08:17.445 "w_mbytes_per_sec": 0 00:08:17.445 }, 00:08:17.445 "claimed": false, 00:08:17.445 "zoned": false, 00:08:17.445 "supported_io_types": { 00:08:17.445 "read": true, 00:08:17.445 "write": true, 00:08:17.445 "unmap": true, 00:08:17.445 "flush": true, 00:08:17.445 "reset": true, 00:08:17.445 "nvme_admin": false, 00:08:17.445 "nvme_io": false, 00:08:17.445 "nvme_io_md": false, 00:08:17.445 "write_zeroes": true, 00:08:17.445 "zcopy": false, 00:08:17.445 "get_zone_info": false, 00:08:17.445 "zone_management": false, 00:08:17.445 "zone_append": false, 00:08:17.445 "compare": true, 00:08:17.445 "compare_and_write": false, 00:08:17.445 "abort": true, 00:08:17.445 "seek_hole": false, 00:08:17.445 "seek_data": false, 00:08:17.445 "copy": true, 00:08:17.445 "nvme_iov_md": false 00:08:17.445 }, 00:08:17.445 "driver_specific": { 00:08:17.445 "gpt": { 00:08:17.445 "base_bdev": "Nvme1n1", 00:08:17.445 "offset_blocks": 655360, 00:08:17.445 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:17.445 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:17.445 "partition_name": "SPDK_TEST_second" 00:08:17.445 } 00:08:17.445 } 00:08:17.445 } 00:08:17.445 ]' 00:08:17.445 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:17.705 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:17.705 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62240 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62240 ']' 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62240 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62240 00:08:17.706 killing process with pid 62240 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62240' 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62240 00:08:17.706 15:56:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62240 00:08:19.635 00:08:19.635 real 0m3.086s 00:08:19.635 user 0m3.224s 00:08:19.635 sys 0m0.382s 00:08:19.635 15:56:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.635 15:56:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:19.635 ************************************ 00:08:19.635 END TEST bdev_gpt_uuid 00:08:19.635 ************************************ 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:19.635 15:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:19.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.635 Waiting for block devices as requested 00:08:19.896 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:19.896 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:19.896 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:19.896 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:25.173 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:25.173 15:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:25.173 15:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:25.430 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:25.430 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:25.430 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:25.430 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:25.430 15:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:25.430 00:08:25.430 real 0m58.267s 00:08:25.430 user 1m13.893s 00:08:25.430 sys 0m8.300s 00:08:25.430 15:56:23 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.430 ************************************ 00:08:25.430 END TEST blockdev_nvme_gpt 00:08:25.430 ************************************ 00:08:25.430 15:56:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:25.430 15:56:23 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:25.430 15:56:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.430 15:56:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.430 15:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.430 ************************************ 00:08:25.430 START TEST nvme 00:08:25.430 ************************************ 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:25.430 * Looking for test storage... 00:08:25.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.430 15:56:23 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.430 15:56:23 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.430 15:56:23 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.430 15:56:23 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.430 15:56:23 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.430 15:56:23 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:25.430 15:56:23 nvme -- scripts/common.sh@345 -- # : 1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.430 15:56:23 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.430 15:56:23 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@353 -- # local d=1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.430 15:56:23 nvme -- scripts/common.sh@355 -- # echo 1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.430 15:56:23 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@353 -- # local d=2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.430 15:56:23 nvme -- scripts/common.sh@355 -- # echo 2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.430 15:56:23 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.430 15:56:23 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.430 15:56:23 nvme -- scripts/common.sh@368 -- # return 0 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.430 --rc genhtml_branch_coverage=1 00:08:25.430 --rc genhtml_function_coverage=1 00:08:25.430 --rc genhtml_legend=1 00:08:25.430 --rc geninfo_all_blocks=1 00:08:25.430 --rc geninfo_unexecuted_blocks=1 00:08:25.430 00:08:25.430 ' 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.430 --rc genhtml_branch_coverage=1 00:08:25.430 --rc genhtml_function_coverage=1 00:08:25.430 --rc genhtml_legend=1 00:08:25.430 --rc geninfo_all_blocks=1 00:08:25.430 --rc geninfo_unexecuted_blocks=1 00:08:25.430 00:08:25.430 ' 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.430 --rc genhtml_branch_coverage=1 00:08:25.430 --rc genhtml_function_coverage=1 00:08:25.430 --rc genhtml_legend=1 00:08:25.430 --rc geninfo_all_blocks=1 00:08:25.430 --rc geninfo_unexecuted_blocks=1 00:08:25.430 00:08:25.430 ' 00:08:25.430 15:56:23 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.430 --rc genhtml_branch_coverage=1 00:08:25.430 --rc genhtml_function_coverage=1 00:08:25.430 --rc genhtml_legend=1 00:08:25.430 --rc geninfo_all_blocks=1 00:08:25.430 --rc geninfo_unexecuted_blocks=1 00:08:25.430 00:08:25.430 ' 00:08:25.430 15:56:23 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:25.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:26.561 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.561 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.561 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.561 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.561 15:56:24 nvme -- nvme/nvme.sh@79 -- # uname 00:08:26.561 15:56:24 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:26.561 15:56:24 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:26.561 15:56:24 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:26.561 15:56:24 nvme -- 
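The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2 field by field: both strings are split on ., - and :, missing fields default to zero, and the first unequal pair decides the result. A condensed sketch of that comparison logic, simplified from what the trace shows rather than copied from the script, with numeric fields assumed:

#!/usr/bin/env bash
# Condensed sketch of the field-wise version comparison traced above.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0; fields are assumed to be plain integers.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # mirrors the lt 1.15 2 above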
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1075 -- # stubpid=62876 00:08:26.561 Waiting for stub to ready for secondary processes... 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62876 ]] 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:26.561 15:56:24 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:26.561 [2024-11-20 15:56:24.685684] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:26.561 [2024-11-20 15:56:24.685889] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:27.493 [2024-11-20 15:56:25.457135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.493 [2024-11-20 15:56:25.550446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.493 [2024-11-20 15:56:25.550498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.493 [2024-11-20 15:56:25.550507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.493 [2024-11-20 15:56:25.563801] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:27.493 [2024-11-20 15:56:25.563834] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:27.493 [2024-11-20 15:56:25.572914] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:27.493 [2024-11-20 15:56:25.572990] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:27.493 [2024-11-20 15:56:25.575488] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:27.493 [2024-11-20 15:56:25.576123] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:27.493 [2024-11-20 15:56:25.576282] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:27.493 [2024-11-20 15:56:25.580479] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:27.493 [2024-11-20 15:56:25.580778] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:27.493 [2024-11-20 15:56:25.580943] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:27.493 [2024-11-20 15:56:25.583322] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:27.494 [2024-11-20 15:56:25.583498] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:27.494 [2024-11-20 15:56:25.583573] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:27.494 [2024-11-20 15:56:25.583623] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:27.494 [2024-11-20 15:56:25.583669] nvme_cuse.c: 
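What the _start_stub trace above amounts to: launch the multi-process stub pinned to core mask 0xE with a 4096 MB memory pool, then spin until it creates the /var/run/spdk_stub0 sentinel, bailing out if the stub's /proc entry disappears first. A hedged sketch of that wait loop, with the binary path taken from this log and the error handling simplified:

#!/usr/bin/env bash
# Sketch of the stub-readiness wait traced above.
stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub   # path from this run

"$stub" -s 4096 -i 0 -m 0xE &
stubpid=$!

echo "Waiting for stub to ready for secondary processes..."
while [[ ! -e /var/run/spdk_stub0 ]]; do
    # If the stub died before creating the sentinel, stop waiting.
    [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
    sleep 1s
done
echo done.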
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:27.494 15:56:25 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:27.494 done. 00:08:27.494 15:56:25 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:27.494 15:56:25 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:27.494 15:56:25 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:27.494 15:56:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.494 15:56:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.494 ************************************ 00:08:27.494 START TEST nvme_reset 00:08:27.494 ************************************ 00:08:27.494 15:56:25 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:27.752 Initializing NVMe Controllers 00:08:27.752 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:27.752 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:27.752 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:27.752 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:27.752 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:27.752 00:08:27.752 real 0m0.225s 00:08:27.752 user 0m0.073s 00:08:27.752 sys 0m0.102s 00:08:27.752 15:56:25 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.752 15:56:25 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:27.752 ************************************ 00:08:27.752 END TEST nvme_reset 00:08:27.752 ************************************ 00:08:27.752 15:56:25 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:27.752 15:56:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.752 15:56:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.752 15:56:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.752 ************************************ 00:08:27.752 START TEST nvme_identify 00:08:27.752 ************************************ 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:27.752 15:56:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:27.752 15:56:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:27.752 15:56:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:27.752 15:56:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:27.752 15:56:25 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:27.752 15:56:25 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:28.011 
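The nvme_identify wrapper above builds its device list by rendering gen_nvme.sh's JSON config and extracting every traddr with jq, refusing to proceed when the resulting array is empty; the four PCI addresses it prints are the controllers identified below. A minimal sketch of that enumeration follows; note the per-controller -r 'trtype:PCIe traddr:...' invocation is an assumed usage pattern, while the log itself runs spdk_nvme_identify -i 0 once:

#!/usr/bin/env bash
# Sketch: enumerate NVMe PCI addresses the way the traced helper does.
rootdir=/home/vagrant/spdk_repo/spdk    # repo path from this run

bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} == 0 )) && { echo "No NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"

# Assumed usage: identify each controller by its PCIe transport ID.
for bdf in "${bdfs[@]}"; do
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf"
done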
===================================================== 00:08:28.011 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:28.011 ===================================================== 00:08:28.011 Controller Capabilities/Features 00:08:28.011 ================================ 00:08:28.011 Vendor ID: 1b36 00:08:28.011 Subsystem Vendor ID: 1af4 00:08:28.011 Serial Number: 12340 00:08:28.011 Model Number: QEMU NVMe Ctrl 00:08:28.012 Firmware Version: 8.0.0 00:08:28.012 Recommended Arb Burst: 6 00:08:28.012 IEEE OUI Identifier: 00 54 52 00:08:28.012 Multi-path I/O 00:08:28.012 May have multiple subsystem ports: No 00:08:28.012 May have multiple controllers: No 00:08:28.012 Associated with SR-IOV VF: No 00:08:28.012 Max Data Transfer Size: 524288 00:08:28.012 Max Number of Namespaces: 256 00:08:28.012 Max Number of I/O Queues: 64 00:08:28.012 NVMe Specification Version (VS): 1.4 00:08:28.012 NVMe Specification Version (Identify): 1.4 00:08:28.012 Maximum Queue Entries: 2048 00:08:28.012 Contiguous Queues Required: Yes 00:08:28.012 Arbitration Mechanisms Supported 00:08:28.012 Weighted Round Robin: Not Supported 00:08:28.012 Vendor Specific: Not Supported 00:08:28.012 Reset Timeout: 7500 ms 00:08:28.012 Doorbell Stride: 4 bytes 00:08:28.012 NVM Subsystem Reset: Not Supported 00:08:28.012 Command Sets Supported 00:08:28.012 NVM Command Set: Supported 00:08:28.012 Boot Partition: Not Supported 00:08:28.012 Memory Page Size Minimum: 4096 bytes 00:08:28.012 Memory Page Size Maximum: 65536 bytes 00:08:28.012 Persistent Memory Region: Not Supported 00:08:28.012 Optional Asynchronous Events Supported 00:08:28.012 Namespace Attribute Notices: Supported 00:08:28.012 Firmware Activation Notices: Not Supported 00:08:28.012 ANA Change Notices: Not Supported 00:08:28.012 PLE Aggregate Log Change Notices: Not Supported 00:08:28.012 LBA Status Info Alert Notices: Not Supported 00:08:28.012 EGE Aggregate Log Change Notices: Not Supported 00:08:28.012 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.012 Zone Descriptor Change Notices: Not Supported 00:08:28.012 Discovery Log Change Notices: Not Supported 00:08:28.012 Controller Attributes 00:08:28.012 128-bit Host Identifier: Not Supported 00:08:28.012 Non-Operational Permissive Mode: Not Supported 00:08:28.012 NVM Sets: Not Supported 00:08:28.012 Read Recovery Levels: Not Supported 00:08:28.012 Endurance Groups: Not Supported 00:08:28.012 Predictable Latency Mode: Not Supported 00:08:28.012 Traffic Based Keep ALive: Not Supported 00:08:28.012 Namespace Granularity: Not Supported 00:08:28.012 SQ Associations: Not Supported 00:08:28.012 UUID List: Not Supported 00:08:28.012 Multi-Domain Subsystem: Not Supported 00:08:28.012 Fixed Capacity Management: Not Supported 00:08:28.012 Variable Capacity Management: Not Supported 00:08:28.012 Delete Endurance Group: Not Supported 00:08:28.012 Delete NVM Set: Not Supported 00:08:28.012 Extended LBA Formats Supported: Supported 00:08:28.012 Flexible Data Placement Supported: Not Supported 00:08:28.012 00:08:28.012 Controller Memory Buffer Support 00:08:28.012 ================================ 00:08:28.012 Supported: No 00:08:28.012 00:08:28.012 Persistent Memory Region Support 00:08:28.012 ================================ 00:08:28.012 Supported: No 00:08:28.012 00:08:28.012 Admin Command Set Attributes 00:08:28.012 ============================ 00:08:28.012 Security Send/Receive: Not Supported 00:08:28.012 Format NVM: Supported 00:08:28.012 Firmware Activate/Download: Not Supported 00:08:28.012 Namespace Management: 
Supported 00:08:28.012 Device Self-Test: Not Supported 00:08:28.012 Directives: Supported 00:08:28.012 NVMe-MI: Not Supported 00:08:28.012 Virtualization Management: Not Supported 00:08:28.012 Doorbell Buffer Config: Supported 00:08:28.012 Get LBA Status Capability: Not Supported 00:08:28.012 Command & Feature Lockdown Capability: Not Supported 00:08:28.012 Abort Command Limit: 4 00:08:28.012 Async Event Request Limit: 4 00:08:28.012 Number of Firmware Slots: N/A 00:08:28.012 Firmware Slot 1 Read-Only: N/A 00:08:28.012 Firmware Activation Without Reset: N/A 00:08:28.012 Multiple Update Detection Support: N/A 00:08:28.012 Firmware Update Granularity: No Information Provided 00:08:28.012 Per-Namespace SMART Log: Yes 00:08:28.012 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.012 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:28.012 Command Effects Log Page: Supported 00:08:28.012 Get Log Page Extended Data: Supported 00:08:28.012 Telemetry Log Pages: Not Supported 00:08:28.012 Persistent Event Log Pages: Not Supported 00:08:28.012 Supported Log Pages Log Page: May Support 00:08:28.012 Commands Supported & Effects Log Page: Not Supported 00:08:28.012 Feature Identifiers & Effects Log Page:May Support 00:08:28.012 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.012 Data Area 4 for Telemetry Log: Not Supported 00:08:28.012 Error Log Page Entries Supported: 1 00:08:28.012 Keep Alive: Not Supported 00:08:28.012 00:08:28.012 NVM Command Set Attributes 00:08:28.012 ========================== 00:08:28.012 Submission Queue Entry Size 00:08:28.012 Max: 64 00:08:28.012 Min: 64 00:08:28.012 Completion Queue Entry Size 00:08:28.012 Max: 16 00:08:28.012 Min: 16 00:08:28.012 Number of Namespaces: 256 00:08:28.012 Compare Command: Supported 00:08:28.012 Write Uncorrectable Command: Not Supported 00:08:28.012 Dataset Management Command: Supported 00:08:28.012 Write Zeroes Command: Supported 00:08:28.012 Set Features Save Field: Supported 00:08:28.012 Reservations: Not Supported 00:08:28.012 Timestamp: Supported 00:08:28.012 Copy: Supported 00:08:28.012 Volatile Write Cache: Present 00:08:28.012 Atomic Write Unit (Normal): 1 00:08:28.012 Atomic Write Unit (PFail): 1 00:08:28.012 Atomic Compare & Write Unit: 1 00:08:28.012 Fused Compare & Write: Not Supported 00:08:28.012 Scatter-Gather List 00:08:28.012 SGL Command Set: Supported 00:08:28.012 SGL Keyed: Not Supported 00:08:28.012 SGL Bit Bucket Descriptor: Not Supported 00:08:28.012 SGL Metadata Pointer: Not Supported 00:08:28.012 Oversized SGL: Not Supported 00:08:28.012 SGL Metadata Address: Not Supported 00:08:28.012 SGL Offset: Not Supported 00:08:28.012 Transport SGL Data Block: Not Supported 00:08:28.012 Replay Protected Memory Block: Not Supported 00:08:28.012 00:08:28.012 Firmware Slot Information 00:08:28.012 ========================= 00:08:28.012 Active slot: 1 00:08:28.012 Slot 1 Firmware Revision: 1.0 00:08:28.012 00:08:28.012 00:08:28.012 Commands Supported and Effects 00:08:28.012 ============================== 00:08:28.012 Admin Commands 00:08:28.012 -------------- 00:08:28.012 Delete I/O Submission Queue (00h): Supported 00:08:28.012 Create I/O Submission Queue (01h): Supported 00:08:28.012 Get Log Page (02h): Supported 00:08:28.012 Delete I/O Completion Queue (04h): Supported 00:08:28.012 Create I/O Completion Queue (05h): Supported 00:08:28.012 Identify (06h): Supported 00:08:28.012 Abort (08h): Supported 00:08:28.012 Set Features (09h): Supported 00:08:28.012 Get Features (0Ah): Supported 00:08:28.012 Asynchronous 
Event Request (0Ch): Supported 00:08:28.012 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.012 Directive Send (19h): Supported 00:08:28.012 Directive Receive (1Ah): Supported 00:08:28.012 Virtualization Management (1Ch): Supported 00:08:28.012 Doorbell Buffer Config (7Ch): Supported 00:08:28.012 Format NVM (80h): Supported LBA-Change 00:08:28.012 I/O Commands 00:08:28.012 ------------ 00:08:28.012 Flush (00h): Supported LBA-Change 00:08:28.012 Write (01h): Supported LBA-Change 00:08:28.012 Read (02h): Supported 00:08:28.012 Compare (05h): Supported 00:08:28.012 Write Zeroes (08h): Supported LBA-Change 00:08:28.012 Dataset Management (09h): Supported LBA-Change 00:08:28.012 Unknown (0Ch): Supported 00:08:28.012 Unknown (12h): Supported 00:08:28.012 Copy (19h): Supported LBA-Change 00:08:28.012 Unknown (1Dh): Supported LBA-Change 00:08:28.012 00:08:28.012 Error Log 00:08:28.012 ========= 00:08:28.012 00:08:28.012 Arbitration 00:08:28.012 =========== 00:08:28.012 Arbitration Burst: no limit 00:08:28.012 00:08:28.012 Power Management 00:08:28.012 ================ 00:08:28.012 Number of Power States: 1 00:08:28.012 Current Power State: Power State #0 00:08:28.012 Power State #0: 00:08:28.012 Max Power: 25.00 W 00:08:28.012 Non-Operational State: Operational 00:08:28.012 Entry Latency: 16 microseconds 00:08:28.012 Exit Latency: 4 microseconds 00:08:28.012 Relative Read Throughput: 0 00:08:28.012 Relative Read Latency: 0 00:08:28.012 Relative Write Throughput: 0 00:08:28.012 Relative Write Latency: 0 00:08:28.012 Idle Power: Not Reported 00:08:28.012 Active Power: Not Reported 00:08:28.012 Non-Operational Permissive Mode: Not Supported 00:08:28.012 00:08:28.012 Health Information 00:08:28.012 ================== 00:08:28.012 Critical Warnings: 00:08:28.012 Available Spare Space: OK 00:08:28.013 Temperature: OK 00:08:28.013 Device Reliability: OK 00:08:28.013 Read Only: No 00:08:28.013 Volatile Memory Backup: OK 00:08:28.013 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.013 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.013 Available Spare: 0% 00:08:28.013 Available Spare Threshold: 0% 00:08:28.013 Life Percentage Used: 0% 00:08:28.013 Data Units Read: 656 00:08:28.013 Data Units Written: 584 00:08:28.013 Host Read Commands: 36821 00:08:28.013 Host Write Commands: 36607 00:08:28.013 Controller Busy Time: 0 minutes 00:08:28.013 Power Cycles: 0 00:08:28.013 Power On Hours: 0 hours 00:08:28.013 Unsafe Shutdowns: 0 00:08:28.013 Unrecoverable Media Errors: 0 00:08:28.013 Lifetime Error Log Entries: 0 00:08:28.013 Warning Temperature Time: 0 minutes 00:08:28.013 Critical Temperature Time: 0 minutes 00:08:28.013 00:08:28.013 Number of Queues 00:08:28.013 ================ 00:08:28.013 Number of I/O Submission Queues: 64 00:08:28.013 Number of I/O Completion Queues: 64 00:08:28.013 00:08:28.013 ZNS Specific Controller Data 00:08:28.013 ============================ 00:08:28.013 Zone Append Size Limit: 0 00:08:28.013 00:08:28.013 00:08:28.013 Active Namespaces 00:08:28.013 ================= 00:08:28.013 Namespace ID:1 00:08:28.013 Error Recovery Timeout: Unlimited 00:08:28.013 Command Set Identifier: NVM (00h) 00:08:28.013 Deallocate: Supported 00:08:28.013 Deallocated/Unwritten Error: Supported 00:08:28.013 Deallocated Read Value: All 0x00 00:08:28.013 Deallocate in Write Zeroes: Not Supported 00:08:28.013 Deallocated Guard Field: 0xFFFF 00:08:28.013 Flush: Supported 00:08:28.013 Reservation: Not Supported 00:08:28.013 Metadata Transferred as: Separate Metadata Buffer 
00:08:28.013 Namespace Sharing Capabilities: Private 00:08:28.013 Size (in LBAs): 1548666 (5GiB) 00:08:28.013 Capacity (in LBAs): 1548666 (5GiB) 00:08:28.013 Utilization (in LBAs): 1548666 (5GiB) 00:08:28.013 Thin Provisioning: Not Supported 00:08:28.013 Per-NS Atomic Units: No 00:08:28.013 Maximum Single Source Range Length: 128 00:08:28.013 Maximum Copy Length: 128 00:08:28.013 Maximum Source Range Count: 128 00:08:28.013 NGUID/EUI64 Never Reused: No 00:08:28.013 Namespace Write Protected: No 00:08:28.013 Number of LBA Formats: 8 00:08:28.013 Current LBA Format: LBA Format #07 00:08:28.013 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.013 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.013 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.013 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.013 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.013 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.013 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.013 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.013 00:08:28.013 NVM Specific Namespace Data 00:08:28.013 =========================== 00:08:28.013 Logical Block Storage Tag Mask: 0 00:08:28.013 Protection Information Capabilities: 00:08:28.013 16b Guard Protection Information Storage Tag Support: No 00:08:28.013 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.013 Storage Tag Check Read Support: No 00:08:28.013 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.013 ===================================================== 00:08:28.013 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:28.013 ===================================================== 00:08:28.013 Controller Capabilities/Features 00:08:28.013 ================================ 00:08:28.013 Vendor ID: 1b36 00:08:28.013 Subsystem Vendor ID: 1af4 00:08:28.013 Serial Number: 12341 00:08:28.013 Model Number: QEMU NVMe Ctrl 00:08:28.013 Firmware Version: 8.0.0 00:08:28.013 Recommended Arb Burst: 6 00:08:28.013 IEEE OUI Identifier: 00 54 52 00:08:28.013 Multi-path I/O 00:08:28.013 May have multiple subsystem ports: No 00:08:28.013 May have multiple controllers: No 00:08:28.013 Associated with SR-IOV VF: No 00:08:28.013 Max Data Transfer Size: 524288 00:08:28.013 Max Number of Namespaces: 256 00:08:28.013 Max Number of I/O Queues: 64 00:08:28.013 NVMe Specification Version (VS): 1.4 00:08:28.013 NVMe Specification Version (Identify): 1.4 00:08:28.013 Maximum Queue Entries: 2048 00:08:28.013 Contiguous Queues Required: Yes 00:08:28.013 Arbitration Mechanisms Supported 00:08:28.013 Weighted Round Robin: Not Supported 00:08:28.013 Vendor Specific: Not Supported 00:08:28.013 Reset Timeout: 7500 ms 00:08:28.013 Doorbell Stride: 
4 bytes 00:08:28.013 NVM Subsystem Reset: Not Supported 00:08:28.013 Command Sets Supported 00:08:28.013 NVM Command Set: Supported 00:08:28.013 Boot Partition: Not Supported 00:08:28.013 Memory Page Size Minimum: 4096 bytes 00:08:28.013 Memory Page Size Maximum: 65536 bytes 00:08:28.013 Persistent Memory Region: Not Supported 00:08:28.013 Optional Asynchronous Events Supported 00:08:28.013 Namespace Attribute Notices: Supported 00:08:28.013 Firmware Activation Notices: Not Supported 00:08:28.013 ANA Change Notices: Not Supported 00:08:28.013 PLE Aggregate Log Change Notices: Not Supported 00:08:28.013 LBA Status Info Alert Notices: Not Supported 00:08:28.013 EGE Aggregate Log Change Notices: Not Supported 00:08:28.013 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.013 Zone Descriptor Change Notices: Not Supported 00:08:28.013 Discovery Log Change Notices: Not Supported 00:08:28.013 Controller Attributes 00:08:28.013 128-bit Host Identifier: Not Supported 00:08:28.013 Non-Operational Permissive Mode: Not Supported 00:08:28.013 NVM Sets: Not Supported 00:08:28.013 Read Recovery Levels: Not Supported 00:08:28.013 Endurance Groups: Not Supported 00:08:28.013 Predictable Latency Mode: Not Supported 00:08:28.013 Traffic Based Keep ALive: Not Supported 00:08:28.013 Namespace Granularity: Not Supported 00:08:28.013 SQ Associations: Not Supported 00:08:28.013 UUID List: Not Supported 00:08:28.013 Multi-Domain Subsystem: Not Supported 00:08:28.013 Fixed Capacity Management: Not Supported 00:08:28.013 Variable Capacity Management: Not Supported 00:08:28.013 Delete Endurance Group: Not Supported 00:08:28.013 Delete NVM Set: Not Supported 00:08:28.013 Extended LBA Formats Supported: Supported 00:08:28.013 Flexible Data Placement Supported: Not Supported 00:08:28.013 00:08:28.013 Controller Memory Buffer Support 00:08:28.013 ================================ 00:08:28.013 Supported: No 00:08:28.013 00:08:28.013 Persistent Memory Region Support 00:08:28.013 ================================ 00:08:28.013 Supported: No 00:08:28.013 00:08:28.013 Admin Command Set Attributes 00:08:28.013 ============================ 00:08:28.013 Security Send/Receive: Not Supported 00:08:28.013 Format NVM: Supported 00:08:28.013 Firmware Activate/Download: Not Supported 00:08:28.013 Namespace Management: Supported 00:08:28.013 Device Self-Test: Not Supported 00:08:28.013 Directives: Supported 00:08:28.013 NVMe-MI: Not Supported 00:08:28.013 Virtualization Management: Not Supported 00:08:28.013 Doorbell Buffer Config: Supported 00:08:28.013 Get LBA Status Capability: Not Supported 00:08:28.013 Command & Feature Lockdown Capability: Not Supported 00:08:28.013 Abort Command Limit: 4 00:08:28.013 Async Event Request Limit: 4 00:08:28.013 Number of Firmware Slots: N/A 00:08:28.013 Firmware Slot 1 Read-Only: N/A 00:08:28.013 Firmware Activation Without Reset: N/A 00:08:28.013 Multiple Update Detection Support: N/A 00:08:28.013 Firmware Update Granularity: No Information Provided 00:08:28.013 Per-Namespace SMART Log: Yes 00:08:28.013 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.013 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:28.013 Command Effects Log Page: Supported 00:08:28.013 Get Log Page Extended Data: Supported 00:08:28.013 Telemetry Log Pages: Not Supported 00:08:28.013 Persistent Event Log Pages: Not Supported 00:08:28.013 Supported Log Pages Log Page: May Support 00:08:28.013 Commands Supported & Effects Log Page: Not Supported 00:08:28.013 Feature Identifiers & Effects Log Page:May Support 
00:08:28.013 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.013 Data Area 4 for Telemetry Log: Not Supported 00:08:28.013 Error Log Page Entries Supported: 1 00:08:28.013 Keep Alive: Not Supported 00:08:28.013 00:08:28.013 NVM Command Set Attributes 00:08:28.013 ========================== 00:08:28.013 Submission Queue Entry Size 00:08:28.013 Max: 64 00:08:28.014 Min: 64 00:08:28.014 Completion Queue Entry Size 00:08:28.014 Max: 16 00:08:28.014 Min: 16 00:08:28.014 Number of Namespaces: 256 00:08:28.014 Compare Command: Supported 00:08:28.014 Write Uncorrectable Command: Not Supported 00:08:28.014 Dataset Management Command: Supported 00:08:28.014 Write Zeroes Command: Supported 00:08:28.014 Set Features Save Field: Supported 00:08:28.014 Reservations: Not Supported 00:08:28.014 Timestamp: Supported 00:08:28.014 Copy: Supported 00:08:28.014 Volatile Write Cache: Present 00:08:28.014 Atomic Write Unit (Normal): 1 00:08:28.014 Atomic Write Unit (PFail): 1 00:08:28.014 Atomic Compare & Write Unit: 1 00:08:28.014 Fused Compare & Write: Not Supported 00:08:28.014 Scatter-Gather List 00:08:28.014 SGL Command Set: Supported 00:08:28.014 SGL Keyed: Not Supported 00:08:28.014 SGL Bit Bucket Descriptor: Not Supported 00:08:28.014 SGL Metadata Pointer: Not Supported 00:08:28.014 Oversized SGL: Not Supported 00:08:28.014 SGL Metadata Address: Not Supported 00:08:28.014 SGL Offset: Not Supported 00:08:28.014 Transport SGL Data Block: Not Supported 00:08:28.014 Replay Protected Memory Block: Not Supported 00:08:28.014 00:08:28.014 Firmware Slot Information 00:08:28.014 ========================= 00:08:28.014 Active slot: 1 00:08:28.014 Slot 1 Firmware Revision: 1.0 00:08:28.014 00:08:28.014 00:08:28.014 Commands Supported and Effects 00:08:28.014 ============================== 00:08:28.014 Admin Commands 00:08:28.014 -------------- 00:08:28.014 Delete I/O Submission Queue (00h): Supported 00:08:28.014 Create I/O Submission Queue (01h): Supported 00:08:28.014 Get Log Page (02h): Supported 00:08:28.014 Delete I/O Completion Queue (04h): Supported 00:08:28.014 Create I/O Completion Queue (05h): Supported 00:08:28.014 Identify (06h): Supported 00:08:28.014 Abort (08h): Supported 00:08:28.014 Set Features (09h): Supported 00:08:28.014 Get Features (0Ah): Supported 00:08:28.014 Asynchronous Event Request (0Ch): Supported 00:08:28.014 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.014 Directive Send (19h): Supported 00:08:28.014 Directive Receive (1Ah): Supported 00:08:28.014 Virtualization Management (1Ch): Supported 00:08:28.014 Doorbell Buffer Config (7Ch): Supported 00:08:28.014 Format NVM (80h): Supported LBA-Change 00:08:28.014 I/O Commands 00:08:28.014 ------------ 00:08:28.014 Flush (00h): Supported LBA-Change 00:08:28.014 Write (01h): Supported LBA-Change 00:08:28.014 Read (02h): Supported 00:08:28.014 Compare (05h): Supported 00:08:28.014 Write Zeroes (08h): Supported LBA-Change 00:08:28.014 Dataset Management (09h): Supported LBA-Change 00:08:28.014 Unknown (0Ch): Supported 00:08:28.014 Unknown (12h): Supported 00:08:28.014 Copy (19h): Supported LBA-Change 00:08:28.014 Unknown (1Dh): Supported LBA-Change 00:08:28.014 00:08:28.014 Error Log 00:08:28.014 ========= 00:08:28.014 00:08:28.014 Arbitration 00:08:28.014 =========== 00:08:28.014 Arbitration Burst: no limit 00:08:28.014 00:08:28.014 Power Management 00:08:28.014 ================ 00:08:28.014 Number of Power States: 1 00:08:28.014 Current Power State: Power State #0 00:08:28.014 Power State #0: 00:08:28.014 Max 
Power: 25.00 W 00:08:28.014 Non-Operational State: Operational 00:08:28.014 Entry Latency: 16 microseconds 00:08:28.014 Exit Latency: 4 microseconds 00:08:28.014 Relative Read Throughput: 0 00:08:28.014 Relative Read Latency: 0 00:08:28.014 Relative Write Throughput: 0 00:08:28.014 Relative Write Latency: 0 00:08:28.014 Idle Power: Not Reported 00:08:28.014 Active Power: Not Reported 00:08:28.014 Non-Operational Permissive Mode: Not Supported 00:08:28.014 00:08:28.014 Health Information 00:08:28.014 ================== 00:08:28.014 Critical Warnings: 00:08:28.014 Available Spare Space: OK 00:08:28.014 Temperature: OK 00:08:28.014 Device Reliability: OK 00:08:28.014 Read Only: No 00:08:28.014 Volatile Memory Backup: OK 00:08:28.014 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.014 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.014 Available Spare: 0% 00:08:28.014 Available Spare Threshold: 0% 00:08:28.014 Life Percentage Used: 0% 00:08:28.014 Data Units Read: 993 00:08:28.014 Data Units Written: 866 00:08:28.014 Host Read Commands: 52835 00:08:28.014 Host Write Commands: 51719 00:08:28.014 Controller Busy Time: 0 minutes 00:08:28.014 Power Cycles: 0 00:08:28.014 Power On Hours: 0 hours 00:08:28.014 Unsafe Shutdowns: 0 00:08:28.014 Unrecoverable Media Errors: 0 00:08:28.014 Lifetime Error Log Entries: 0 00:08:28.014 Warning Temperature Time: 0 minutes 00:08:28.014 Critical Temperature Time: 0 minutes 00:08:28.014 00:08:28.014 Number of Queues 00:08:28.014 ================ 00:08:28.014 Number of I/O Submission Queues: 64 00:08:28.014 Number of I/O Completion Queues: 64 00:08:28.014 00:08:28.014 ZNS Specific Controller Data 00:08:28.014 ============================ 00:08:28.014 Zone Append Size Limit: 0 00:08:28.014 00:08:28.014 00:08:28.014 Active Namespaces 00:08:28.014 ================= 00:08:28.014 Namespace ID:1 00:08:28.014 Error Recovery Timeout: Unlimited 00:08:28.014 Command Set Identifier: NVM (00h) 00:08:28.014 Deallocate: Supported 00:08:28.014 Deallocated/Unwritten Error: Supported 00:08:28.014 Deallocated Read Value: All 0x00 00:08:28.014 Deallocate in Write Zeroes: Not Supported 00:08:28.014 Deallocated Guard Field: 0xFFFF 00:08:28.014 Flush: Supported 00:08:28.014 Reservation: Not Supported 00:08:28.014 Namespace Sharing Capabilities: Private 00:08:28.014 Size (in LBAs): 1310720 (5GiB) 00:08:28.014 Capacity (in LBAs): 1310720 (5GiB) 00:08:28.014 Utilization (in LBAs): 1310720 (5GiB) 00:08:28.014 Thin Provisioning: Not Supported 00:08:28.014 Per-NS Atomic Units: No 00:08:28.014 Maximum Single Source Range Length: 128 00:08:28.014 Maximum Copy Length: 128 00:08:28.014 Maximum Source Range Count: 128 00:08:28.014 NGUID/EUI64 Never Reused: No 00:08:28.014 Namespace Write Protected: No 00:08:28.014 Number of LBA Formats: 8 00:08:28.014 Current LBA Format: LBA Format #04 00:08:28.014 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.014 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.014 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.014 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.014 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.014 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.014 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.014 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.014 00:08:28.014 NVM Specific Namespace Data 00:08:28.014 =========================== 00:08:28.014 Logical Block Storage Tag Mask: 0 00:08:28.014 Protection Information Capabilities: 00:08:28.014 16b Guard 
Protection Information Storage Tag Support: No 00:08:28.014 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.014 Storage Tag Check Read Support: No 00:08:28.014 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.014 ===================================================== 00:08:28.014 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:28.014 ===================================================== 00:08:28.014 Controller Capabilities/Features 00:08:28.014 ================================ 00:08:28.014 Vendor ID: 1b36 00:08:28.014 Subsystem Vendor ID: 1af4 00:08:28.014 Serial Number: 12343 00:08:28.014 Model Number: QEMU NVMe Ctrl 00:08:28.014 Firmware Version: 8.0.0 00:08:28.014 Recommended Arb Burst: 6 00:08:28.014 IEEE OUI Identifier: 00 54 52 00:08:28.014 Mul[2024-11-20 15:56:26.166343] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62897 terminated unexpected 00:08:28.014 [2024-11-20 15:56:26.167328] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62897 terminated unexpected 00:08:28.014 [2024-11-20 15:56:26.167880] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62897 terminated unexpected 00:08:28.014 ti-path I/O 00:08:28.014 May have multiple subsystem ports: No 00:08:28.014 May have multiple controllers: Yes 00:08:28.014 Associated with SR-IOV VF: No 00:08:28.014 Max Data Transfer Size: 524288 00:08:28.014 Max Number of Namespaces: 256 00:08:28.014 Max Number of I/O Queues: 64 00:08:28.014 NVMe Specification Version (VS): 1.4 00:08:28.014 NVMe Specification Version (Identify): 1.4 00:08:28.015 Maximum Queue Entries: 2048 00:08:28.015 Contiguous Queues Required: Yes 00:08:28.015 Arbitration Mechanisms Supported 00:08:28.015 Weighted Round Robin: Not Supported 00:08:28.015 Vendor Specific: Not Supported 00:08:28.015 Reset Timeout: 7500 ms 00:08:28.015 Doorbell Stride: 4 bytes 00:08:28.015 NVM Subsystem Reset: Not Supported 00:08:28.015 Command Sets Supported 00:08:28.015 NVM Command Set: Supported 00:08:28.015 Boot Partition: Not Supported 00:08:28.015 Memory Page Size Minimum: 4096 bytes 00:08:28.015 Memory Page Size Maximum: 65536 bytes 00:08:28.015 Persistent Memory Region: Not Supported 00:08:28.015 Optional Asynchronous Events Supported 00:08:28.015 Namespace Attribute Notices: Supported 00:08:28.015 Firmware Activation Notices: Not Supported 00:08:28.015 ANA Change Notices: Not Supported 00:08:28.015 PLE Aggregate Log Change Notices: Not Supported 00:08:28.015 LBA Status Info Alert Notices: Not Supported 00:08:28.015 EGE Aggregate Log Change Notices: Not Supported 00:08:28.015 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.015 Zone Descriptor Change Notices: Not 
Supported 00:08:28.015 Discovery Log Change Notices: Not Supported 00:08:28.015 Controller Attributes 00:08:28.015 128-bit Host Identifier: Not Supported 00:08:28.015 Non-Operational Permissive Mode: Not Supported 00:08:28.015 NVM Sets: Not Supported 00:08:28.015 Read Recovery Levels: Not Supported 00:08:28.015 Endurance Groups: Supported 00:08:28.015 Predictable Latency Mode: Not Supported 00:08:28.015 Traffic Based Keep ALive: Not Supported 00:08:28.015 Namespace Granularity: Not Supported 00:08:28.015 SQ Associations: Not Supported 00:08:28.015 UUID List: Not Supported 00:08:28.015 Multi-Domain Subsystem: Not Supported 00:08:28.015 Fixed Capacity Management: Not Supported 00:08:28.015 Variable Capacity Management: Not Supported 00:08:28.015 Delete Endurance Group: Not Supported 00:08:28.015 Delete NVM Set: Not Supported 00:08:28.015 Extended LBA Formats Supported: Supported 00:08:28.015 Flexible Data Placement Supported: Supported 00:08:28.015 00:08:28.015 Controller Memory Buffer Support 00:08:28.015 ================================ 00:08:28.015 Supported: No 00:08:28.015 00:08:28.015 Persistent Memory Region Support 00:08:28.015 ================================ 00:08:28.015 Supported: No 00:08:28.015 00:08:28.015 Admin Command Set Attributes 00:08:28.015 ============================ 00:08:28.015 Security Send/Receive: Not Supported 00:08:28.015 Format NVM: Supported 00:08:28.015 Firmware Activate/Download: Not Supported 00:08:28.015 Namespace Management: Supported 00:08:28.015 Device Self-Test: Not Supported 00:08:28.015 Directives: Supported 00:08:28.015 NVMe-MI: Not Supported 00:08:28.015 Virtualization Management: Not Supported 00:08:28.015 Doorbell Buffer Config: Supported 00:08:28.015 Get LBA Status Capability: Not Supported 00:08:28.015 Command & Feature Lockdown Capability: Not Supported 00:08:28.015 Abort Command Limit: 4 00:08:28.015 Async Event Request Limit: 4 00:08:28.015 Number of Firmware Slots: N/A 00:08:28.015 Firmware Slot 1 Read-Only: N/A 00:08:28.015 Firmware Activation Without Reset: N/A 00:08:28.015 Multiple Update Detection Support: N/A 00:08:28.015 Firmware Update Granularity: No Information Provided 00:08:28.015 Per-Namespace SMART Log: Yes 00:08:28.015 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.015 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:28.015 Command Effects Log Page: Supported 00:08:28.015 Get Log Page Extended Data: Supported 00:08:28.015 Telemetry Log Pages: Not Supported 00:08:28.015 Persistent Event Log Pages: Not Supported 00:08:28.015 Supported Log Pages Log Page: May Support 00:08:28.015 Commands Supported & Effects Log Page: Not Supported 00:08:28.015 Feature Identifiers & Effects Log Page:May Support 00:08:28.015 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.015 Data Area 4 for Telemetry Log: Not Supported 00:08:28.015 Error Log Page Entries Supported: 1 00:08:28.015 Keep Alive: Not Supported 00:08:28.015 00:08:28.015 NVM Command Set Attributes 00:08:28.015 ========================== 00:08:28.015 Submission Queue Entry Size 00:08:28.015 Max: 64 00:08:28.015 Min: 64 00:08:28.015 Completion Queue Entry Size 00:08:28.015 Max: 16 00:08:28.015 Min: 16 00:08:28.015 Number of Namespaces: 256 00:08:28.015 Compare Command: Supported 00:08:28.015 Write Uncorrectable Command: Not Supported 00:08:28.015 Dataset Management Command: Supported 00:08:28.015 Write Zeroes Command: Supported 00:08:28.015 Set Features Save Field: Supported 00:08:28.015 Reservations: Not Supported 00:08:28.015 Timestamp: Supported 
00:08:28.015 Copy: Supported 00:08:28.015 Volatile Write Cache: Present 00:08:28.015 Atomic Write Unit (Normal): 1 00:08:28.015 Atomic Write Unit (PFail): 1 00:08:28.015 Atomic Compare & Write Unit: 1 00:08:28.015 Fused Compare & Write: Not Supported 00:08:28.015 Scatter-Gather List 00:08:28.015 SGL Command Set: Supported 00:08:28.015 SGL Keyed: Not Supported 00:08:28.015 SGL Bit Bucket Descriptor: Not Supported 00:08:28.015 SGL Metadata Pointer: Not Supported 00:08:28.015 Oversized SGL: Not Supported 00:08:28.015 SGL Metadata Address: Not Supported 00:08:28.015 SGL Offset: Not Supported 00:08:28.015 Transport SGL Data Block: Not Supported 00:08:28.015 Replay Protected Memory Block: Not Supported 00:08:28.015 00:08:28.015 Firmware Slot Information 00:08:28.015 ========================= 00:08:28.015 Active slot: 1 00:08:28.015 Slot 1 Firmware Revision: 1.0 00:08:28.015 00:08:28.015 00:08:28.015 Commands Supported and Effects 00:08:28.015 ============================== 00:08:28.015 Admin Commands 00:08:28.015 -------------- 00:08:28.015 Delete I/O Submission Queue (00h): Supported 00:08:28.015 Create I/O Submission Queue (01h): Supported 00:08:28.015 Get Log Page (02h): Supported 00:08:28.015 Delete I/O Completion Queue (04h): Supported 00:08:28.015 Create I/O Completion Queue (05h): Supported 00:08:28.015 Identify (06h): Supported 00:08:28.015 Abort (08h): Supported 00:08:28.015 Set Features (09h): Supported 00:08:28.015 Get Features (0Ah): Supported 00:08:28.015 Asynchronous Event Request (0Ch): Supported 00:08:28.015 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.015 Directive Send (19h): Supported 00:08:28.015 Directive Receive (1Ah): Supported 00:08:28.015 Virtualization Management (1Ch): Supported 00:08:28.015 Doorbell Buffer Config (7Ch): Supported 00:08:28.015 Format NVM (80h): Supported LBA-Change 00:08:28.015 I/O Commands 00:08:28.015 ------------ 00:08:28.015 Flush (00h): Supported LBA-Change 00:08:28.015 Write (01h): Supported LBA-Change 00:08:28.015 Read (02h): Supported 00:08:28.015 Compare (05h): Supported 00:08:28.015 Write Zeroes (08h): Supported LBA-Change 00:08:28.015 Dataset Management (09h): Supported LBA-Change 00:08:28.015 Unknown (0Ch): Supported 00:08:28.015 Unknown (12h): Supported 00:08:28.015 Copy (19h): Supported LBA-Change 00:08:28.015 Unknown (1Dh): Supported LBA-Change 00:08:28.015 00:08:28.015 Error Log 00:08:28.015 ========= 00:08:28.015 00:08:28.015 Arbitration 00:08:28.015 =========== 00:08:28.015 Arbitration Burst: no limit 00:08:28.015 00:08:28.015 Power Management 00:08:28.015 ================ 00:08:28.015 Number of Power States: 1 00:08:28.015 Current Power State: Power State #0 00:08:28.015 Power State #0: 00:08:28.015 Max Power: 25.00 W 00:08:28.015 Non-Operational State: Operational 00:08:28.015 Entry Latency: 16 microseconds 00:08:28.015 Exit Latency: 4 microseconds 00:08:28.015 Relative Read Throughput: 0 00:08:28.015 Relative Read Latency: 0 00:08:28.015 Relative Write Throughput: 0 00:08:28.015 Relative Write Latency: 0 00:08:28.015 Idle Power: Not Reported 00:08:28.015 Active Power: Not Reported 00:08:28.015 Non-Operational Permissive Mode: Not Supported 00:08:28.015 00:08:28.015 Health Information 00:08:28.015 ================== 00:08:28.015 Critical Warnings: 00:08:28.015 Available Spare Space: OK 00:08:28.015 Temperature: OK 00:08:28.015 Device Reliability: OK 00:08:28.015 Read Only: No 00:08:28.015 Volatile Memory Backup: OK 00:08:28.015 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.015 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:08:28.015 Available Spare: 0% 00:08:28.015 Available Spare Threshold: 0% 00:08:28.015 Life Percentage Used: 0% 00:08:28.015 Data Units Read: 766 00:08:28.015 Data Units Written: 695 00:08:28.015 Host Read Commands: 38039 00:08:28.015 Host Write Commands: 37462 00:08:28.015 Controller Busy Time: 0 minutes 00:08:28.015 Power Cycles: 0 00:08:28.015 Power On Hours: 0 hours 00:08:28.015 Unsafe Shutdowns: 0 00:08:28.016 Unrecoverable Media Errors: 0 00:08:28.016 Lifetime Error Log Entries: 0 00:08:28.016 Warning Temperature Time: 0 minutes 00:08:28.016 Critical Temperature Time: 0 minutes 00:08:28.016 00:08:28.016 Number of Queues 00:08:28.016 ================ 00:08:28.016 Number of I/O Submission Queues: 64 00:08:28.016 Number of I/O Completion Queues: 64 00:08:28.016 00:08:28.016 ZNS Specific Controller Data 00:08:28.016 ============================ 00:08:28.016 Zone Append Size Limit: 0 00:08:28.016 00:08:28.016 00:08:28.016 Active Namespaces 00:08:28.016 ================= 00:08:28.016 Namespace ID:1 00:08:28.016 Error Recovery Timeout: Unlimited 00:08:28.016 Command Set Identifier: NVM (00h) 00:08:28.016 Deallocate: Supported 00:08:28.016 Deallocated/Unwritten Error: Supported 00:08:28.016 Deallocated Read Value: All 0x00 00:08:28.016 Deallocate in Write Zeroes: Not Supported 00:08:28.016 Deallocated Guard Field: 0xFFFF 00:08:28.016 Flush: Supported 00:08:28.016 Reservation: Not Supported 00:08:28.016 Namespace Sharing Capabilities: Multiple Controllers 00:08:28.016 Size (in LBAs): 262144 (1GiB) 00:08:28.016 Capacity (in LBAs): 262144 (1GiB) 00:08:28.016 Utilization (in LBAs): 262144 (1GiB) 00:08:28.016 Thin Provisioning: Not Supported 00:08:28.016 Per-NS Atomic Units: No 00:08:28.016 Maximum Single Source Range Length: 128 00:08:28.016 Maximum Copy Length: 128 00:08:28.016 Maximum Source Range Count: 128 00:08:28.016 NGUID/EUI64 Never Reused: No 00:08:28.016 Namespace Write Protected: No 00:08:28.016 Endurance group ID: 1 00:08:28.016 Number of LBA Formats: 8 00:08:28.016 Current LBA Format: LBA Format #04 00:08:28.016 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.016 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.016 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.016 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.016 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.016 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.016 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.016 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.016 00:08:28.016 Get Feature FDP: 00:08:28.016 ================ 00:08:28.016 Enabled: Yes 00:08:28.016 FDP configuration index: 0 00:08:28.016 00:08:28.016 FDP configurations log page 00:08:28.016 =========================== 00:08:28.016 Number of FDP configurations: 1 00:08:28.016 Version: 0 00:08:28.016 Size: 112 00:08:28.016 FDP Configuration Descriptor: 0 00:08:28.016 Descriptor Size: 96 00:08:28.016 Reclaim Group Identifier format: 2 00:08:28.016 FDP Volatile Write Cache: Not Present 00:08:28.016 FDP Configuration: Valid 00:08:28.016 Vendor Specific Size: 0 00:08:28.016 Number of Reclaim Groups: 2 00:08:28.016 Number of Recalim Unit Handles: 8 00:08:28.016 Max Placement Identifiers: 128 00:08:28.016 Number of Namespaces Suppprted: 256 00:08:28.016 Reclaim unit Nominal Size: 6000000 bytes 00:08:28.016 Estimated Reclaim Unit Time Limit: Not Reported 00:08:28.016 RUH Desc #000: RUH Type: Initially Isolated 00:08:28.016 RUH Desc #001: RUH Type: Initially Isolated 
00:08:28.016 RUH Desc #002: RUH Type: Initially Isolated 00:08:28.016 RUH Desc #003: RUH Type: Initially Isolated 00:08:28.016 RUH Desc #004: RUH Type: Initially Isolated 00:08:28.016 RUH Desc #005: RUH Type: Initially Isolated 00:08:28.016 RUH Desc #006: RUH Type: Initially Isolated 00:08:28.016 RUH Desc #007: RUH Type: Initially Isolated 00:08:28.016 00:08:28.016 FDP reclaim unit handle usage log page 00:08:28.016 ====================================== 00:08:28.016 Number of Reclaim Unit Handles: 8 00:08:28.016 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:28.016 RUH Usage Desc #001: RUH Attributes: Unused 00:08:28.016 RUH Usage Desc #002: RUH Attributes: Unused 00:08:28.016 RUH Usage Desc #003: RUH Attributes: Unused 00:08:28.016 RUH Usage Desc #004: RUH Attributes: Unused 00:08:28.016 RUH Usage Desc #005: RUH Attributes: Unused 00:08:28.016 RUH Usage Desc #006: RUH Attributes: Unused 00:08:28.016 RUH Usage Desc #007: RUH Attributes: Unused 00:08:28.016 00:08:28.016 FDP statistics log page 00:08:28.016 ======================= 00:08:28.016 Host bytes with metadata written: 440901632 00:08:28.016 Media[2024-11-20 15:56:26.169558] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62897 terminated unexpected 00:08:28.016 bytes with metadata written: 440954880 00:08:28.016 Media bytes erased: 0 00:08:28.016 00:08:28.016 FDP events log page 00:08:28.016 =================== 00:08:28.016 Number of FDP events: 0 00:08:28.016 00:08:28.016 NVM Specific Namespace Data 00:08:28.016 =========================== 00:08:28.016 Logical Block Storage Tag Mask: 0 00:08:28.016 Protection Information Capabilities: 00:08:28.016 16b Guard Protection Information Storage Tag Support: No 00:08:28.016 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.016 Storage Tag Check Read Support: No 00:08:28.016 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.016 ===================================================== 00:08:28.016 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:28.016 ===================================================== 00:08:28.016 Controller Capabilities/Features 00:08:28.016 ================================ 00:08:28.016 Vendor ID: 1b36 00:08:28.016 Subsystem Vendor ID: 1af4 00:08:28.016 Serial Number: 12342 00:08:28.016 Model Number: QEMU NVMe Ctrl 00:08:28.016 Firmware Version: 8.0.0 00:08:28.016 Recommended Arb Burst: 6 00:08:28.016 IEEE OUI Identifier: 00 54 52 00:08:28.016 Multi-path I/O 00:08:28.016 May have multiple subsystem ports: No 00:08:28.016 May have multiple controllers: No 00:08:28.016 Associated with SR-IOV VF: No 00:08:28.016 Max Data Transfer Size: 524288 00:08:28.016 Max Number of Namespaces: 256 00:08:28.016 Max Number of I/O 
Queues: 64 00:08:28.016 NVMe Specification Version (VS): 1.4 00:08:28.016 NVMe Specification Version (Identify): 1.4 00:08:28.016 Maximum Queue Entries: 2048 00:08:28.016 Contiguous Queues Required: Yes 00:08:28.016 Arbitration Mechanisms Supported 00:08:28.016 Weighted Round Robin: Not Supported 00:08:28.016 Vendor Specific: Not Supported 00:08:28.016 Reset Timeout: 7500 ms 00:08:28.016 Doorbell Stride: 4 bytes 00:08:28.016 NVM Subsystem Reset: Not Supported 00:08:28.016 Command Sets Supported 00:08:28.016 NVM Command Set: Supported 00:08:28.016 Boot Partition: Not Supported 00:08:28.016 Memory Page Size Minimum: 4096 bytes 00:08:28.016 Memory Page Size Maximum: 65536 bytes 00:08:28.016 Persistent Memory Region: Not Supported 00:08:28.016 Optional Asynchronous Events Supported 00:08:28.016 Namespace Attribute Notices: Supported 00:08:28.016 Firmware Activation Notices: Not Supported 00:08:28.016 ANA Change Notices: Not Supported 00:08:28.016 PLE Aggregate Log Change Notices: Not Supported 00:08:28.016 LBA Status Info Alert Notices: Not Supported 00:08:28.016 EGE Aggregate Log Change Notices: Not Supported 00:08:28.016 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.016 Zone Descriptor Change Notices: Not Supported 00:08:28.016 Discovery Log Change Notices: Not Supported 00:08:28.016 Controller Attributes 00:08:28.016 128-bit Host Identifier: Not Supported 00:08:28.016 Non-Operational Permissive Mode: Not Supported 00:08:28.016 NVM Sets: Not Supported 00:08:28.016 Read Recovery Levels: Not Supported 00:08:28.017 Endurance Groups: Not Supported 00:08:28.017 Predictable Latency Mode: Not Supported 00:08:28.017 Traffic Based Keep ALive: Not Supported 00:08:28.017 Namespace Granularity: Not Supported 00:08:28.017 SQ Associations: Not Supported 00:08:28.017 UUID List: Not Supported 00:08:28.017 Multi-Domain Subsystem: Not Supported 00:08:28.017 Fixed Capacity Management: Not Supported 00:08:28.017 Variable Capacity Management: Not Supported 00:08:28.017 Delete Endurance Group: Not Supported 00:08:28.017 Delete NVM Set: Not Supported 00:08:28.017 Extended LBA Formats Supported: Supported 00:08:28.017 Flexible Data Placement Supported: Not Supported 00:08:28.017 00:08:28.017 Controller Memory Buffer Support 00:08:28.017 ================================ 00:08:28.017 Supported: No 00:08:28.017 00:08:28.017 Persistent Memory Region Support 00:08:28.017 ================================ 00:08:28.017 Supported: No 00:08:28.017 00:08:28.017 Admin Command Set Attributes 00:08:28.017 ============================ 00:08:28.017 Security Send/Receive: Not Supported 00:08:28.017 Format NVM: Supported 00:08:28.017 Firmware Activate/Download: Not Supported 00:08:28.017 Namespace Management: Supported 00:08:28.017 Device Self-Test: Not Supported 00:08:28.017 Directives: Supported 00:08:28.017 NVMe-MI: Not Supported 00:08:28.017 Virtualization Management: Not Supported 00:08:28.017 Doorbell Buffer Config: Supported 00:08:28.017 Get LBA Status Capability: Not Supported 00:08:28.017 Command & Feature Lockdown Capability: Not Supported 00:08:28.017 Abort Command Limit: 4 00:08:28.017 Async Event Request Limit: 4 00:08:28.017 Number of Firmware Slots: N/A 00:08:28.017 Firmware Slot 1 Read-Only: N/A 00:08:28.017 Firmware Activation Without Reset: N/A 00:08:28.017 Multiple Update Detection Support: N/A 00:08:28.017 Firmware Update Granularity: No Information Provided 00:08:28.017 Per-Namespace SMART Log: Yes 00:08:28.017 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.017 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:08:28.017 Command Effects Log Page: Supported 00:08:28.017 Get Log Page Extended Data: Supported 00:08:28.017 Telemetry Log Pages: Not Supported 00:08:28.017 Persistent Event Log Pages: Not Supported 00:08:28.017 Supported Log Pages Log Page: May Support 00:08:28.017 Commands Supported & Effects Log Page: Not Supported 00:08:28.017 Feature Identifiers & Effects Log Page:May Support 00:08:28.017 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.017 Data Area 4 for Telemetry Log: Not Supported 00:08:28.017 Error Log Page Entries Supported: 1 00:08:28.017 Keep Alive: Not Supported 00:08:28.017 00:08:28.017 NVM Command Set Attributes 00:08:28.017 ========================== 00:08:28.017 Submission Queue Entry Size 00:08:28.017 Max: 64 00:08:28.017 Min: 64 00:08:28.017 Completion Queue Entry Size 00:08:28.017 Max: 16 00:08:28.017 Min: 16 00:08:28.017 Number of Namespaces: 256 00:08:28.017 Compare Command: Supported 00:08:28.017 Write Uncorrectable Command: Not Supported 00:08:28.017 Dataset Management Command: Supported 00:08:28.017 Write Zeroes Command: Supported 00:08:28.017 Set Features Save Field: Supported 00:08:28.017 Reservations: Not Supported 00:08:28.017 Timestamp: Supported 00:08:28.017 Copy: Supported 00:08:28.017 Volatile Write Cache: Present 00:08:28.017 Atomic Write Unit (Normal): 1 00:08:28.017 Atomic Write Unit (PFail): 1 00:08:28.017 Atomic Compare & Write Unit: 1 00:08:28.017 Fused Compare & Write: Not Supported 00:08:28.017 Scatter-Gather List 00:08:28.017 SGL Command Set: Supported 00:08:28.017 SGL Keyed: Not Supported 00:08:28.017 SGL Bit Bucket Descriptor: Not Supported 00:08:28.017 SGL Metadata Pointer: Not Supported 00:08:28.017 Oversized SGL: Not Supported 00:08:28.017 SGL Metadata Address: Not Supported 00:08:28.017 SGL Offset: Not Supported 00:08:28.017 Transport SGL Data Block: Not Supported 00:08:28.017 Replay Protected Memory Block: Not Supported 00:08:28.017 00:08:28.017 Firmware Slot Information 00:08:28.017 ========================= 00:08:28.017 Active slot: 1 00:08:28.017 Slot 1 Firmware Revision: 1.0 00:08:28.017 00:08:28.017 00:08:28.017 Commands Supported and Effects 00:08:28.017 ============================== 00:08:28.017 Admin Commands 00:08:28.017 -------------- 00:08:28.017 Delete I/O Submission Queue (00h): Supported 00:08:28.017 Create I/O Submission Queue (01h): Supported 00:08:28.017 Get Log Page (02h): Supported 00:08:28.017 Delete I/O Completion Queue (04h): Supported 00:08:28.017 Create I/O Completion Queue (05h): Supported 00:08:28.017 Identify (06h): Supported 00:08:28.017 Abort (08h): Supported 00:08:28.017 Set Features (09h): Supported 00:08:28.017 Get Features (0Ah): Supported 00:08:28.017 Asynchronous Event Request (0Ch): Supported 00:08:28.017 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.017 Directive Send (19h): Supported 00:08:28.017 Directive Receive (1Ah): Supported 00:08:28.017 Virtualization Management (1Ch): Supported 00:08:28.017 Doorbell Buffer Config (7Ch): Supported 00:08:28.017 Format NVM (80h): Supported LBA-Change 00:08:28.017 I/O Commands 00:08:28.017 ------------ 00:08:28.017 Flush (00h): Supported LBA-Change 00:08:28.017 Write (01h): Supported LBA-Change 00:08:28.017 Read (02h): Supported 00:08:28.017 Compare (05h): Supported 00:08:28.017 Write Zeroes (08h): Supported LBA-Change 00:08:28.017 Dataset Management (09h): Supported LBA-Change 00:08:28.017 Unknown (0Ch): Supported 00:08:28.017 Unknown (12h): Supported 00:08:28.017 Copy (19h): Supported LBA-Change 
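
The FDP statistics log page above reports both host and media byte counters; dividing one by the other gives the effective write amplification for the run. A minimal back-of-envelope check using the two counters printed in this log (the awk one-liner is illustrative, not part of nvme.sh):

# Effective write amplification from the FDP statistics above
# (counters copied from this log; helper is hypothetical, not part of the test)
host_bytes=440901632
media_bytes=440954880
awk -v h="$host_bytes" -v m="$media_bytes" 'BEGIN { printf "WAF = %.6f\n", m / h }'
# prints: WAF = 1.000121  (close to 1.0, as expected for a fresh QEMU namespace)
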
00:08:28.017 Unknown (1Dh): Supported LBA-Change 00:08:28.017 00:08:28.017 Error Log 00:08:28.017 ========= 00:08:28.017 00:08:28.017 Arbitration 00:08:28.017 =========== 00:08:28.017 Arbitration Burst: no limit 00:08:28.017 00:08:28.017 Power Management 00:08:28.017 ================ 00:08:28.017 Number of Power States: 1 00:08:28.017 Current Power State: Power State #0 00:08:28.017 Power State #0: 00:08:28.017 Max Power: 25.00 W 00:08:28.017 Non-Operational State: Operational 00:08:28.017 Entry Latency: 16 microseconds 00:08:28.017 Exit Latency: 4 microseconds 00:08:28.017 Relative Read Throughput: 0 00:08:28.017 Relative Read Latency: 0 00:08:28.017 Relative Write Throughput: 0 00:08:28.017 Relative Write Latency: 0 00:08:28.017 Idle Power: Not Reported 00:08:28.017 Active Power: Not Reported 00:08:28.017 Non-Operational Permissive Mode: Not Supported 00:08:28.017 00:08:28.017 Health Information 00:08:28.017 ================== 00:08:28.017 Critical Warnings: 00:08:28.017 Available Spare Space: OK 00:08:28.017 Temperature: OK 00:08:28.017 Device Reliability: OK 00:08:28.017 Read Only: No 00:08:28.017 Volatile Memory Backup: OK 00:08:28.017 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.017 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.017 Available Spare: 0% 00:08:28.017 Available Spare Threshold: 0% 00:08:28.017 Life Percentage Used: 0% 00:08:28.017 Data Units Read: 2103 00:08:28.017 Data Units Written: 1890 00:08:28.017 Host Read Commands: 112371 00:08:28.017 Host Write Commands: 110640 00:08:28.017 Controller Busy Time: 0 minutes 00:08:28.017 Power Cycles: 0 00:08:28.017 Power On Hours: 0 hours 00:08:28.017 Unsafe Shutdowns: 0 00:08:28.017 Unrecoverable Media Errors: 0 00:08:28.017 Lifetime Error Log Entries: 0 00:08:28.017 Warning Temperature Time: 0 minutes 00:08:28.017 Critical Temperature Time: 0 minutes 00:08:28.017 00:08:28.017 Number of Queues 00:08:28.017 ================ 00:08:28.017 Number of I/O Submission Queues: 64 00:08:28.017 Number of I/O Completion Queues: 64 00:08:28.017 00:08:28.017 ZNS Specific Controller Data 00:08:28.017 ============================ 00:08:28.017 Zone Append Size Limit: 0 00:08:28.017 00:08:28.017 00:08:28.017 Active Namespaces 00:08:28.017 ================= 00:08:28.017 Namespace ID:1 00:08:28.017 Error Recovery Timeout: Unlimited 00:08:28.017 Command Set Identifier: NVM (00h) 00:08:28.017 Deallocate: Supported 00:08:28.017 Deallocated/Unwritten Error: Supported 00:08:28.017 Deallocated Read Value: All 0x00 00:08:28.017 Deallocate in Write Zeroes: Not Supported 00:08:28.017 Deallocated Guard Field: 0xFFFF 00:08:28.017 Flush: Supported 00:08:28.017 Reservation: Not Supported 00:08:28.017 Namespace Sharing Capabilities: Private 00:08:28.017 Size (in LBAs): 1048576 (4GiB) 00:08:28.017 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.017 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.017 Thin Provisioning: Not Supported 00:08:28.017 Per-NS Atomic Units: No 00:08:28.017 Maximum Single Source Range Length: 128 00:08:28.017 Maximum Copy Length: 128 00:08:28.018 Maximum Source Range Count: 128 00:08:28.018 NGUID/EUI64 Never Reused: No 00:08:28.018 Namespace Write Protected: No 00:08:28.018 Number of LBA Formats: 8 00:08:28.018 Current LBA Format: LBA Format #04 00:08:28.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.018 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.018 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.018 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.018 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:28.018 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.018 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.018 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.018 00:08:28.018 NVM Specific Namespace Data 00:08:28.018 =========================== 00:08:28.018 Logical Block Storage Tag Mask: 0 00:08:28.018 Protection Information Capabilities: 00:08:28.018 16b Guard Protection Information Storage Tag Support: No 00:08:28.018 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.018 Storage Tag Check Read Support: No 00:08:28.018 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Namespace ID:2 00:08:28.018 Error Recovery Timeout: Unlimited 00:08:28.018 Command Set Identifier: NVM (00h) 00:08:28.018 Deallocate: Supported 00:08:28.018 Deallocated/Unwritten Error: Supported 00:08:28.018 Deallocated Read Value: All 0x00 00:08:28.018 Deallocate in Write Zeroes: Not Supported 00:08:28.018 Deallocated Guard Field: 0xFFFF 00:08:28.018 Flush: Supported 00:08:28.018 Reservation: Not Supported 00:08:28.018 Namespace Sharing Capabilities: Private 00:08:28.018 Size (in LBAs): 1048576 (4GiB) 00:08:28.018 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.018 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.018 Thin Provisioning: Not Supported 00:08:28.018 Per-NS Atomic Units: No 00:08:28.018 Maximum Single Source Range Length: 128 00:08:28.018 Maximum Copy Length: 128 00:08:28.018 Maximum Source Range Count: 128 00:08:28.018 NGUID/EUI64 Never Reused: No 00:08:28.018 Namespace Write Protected: No 00:08:28.018 Number of LBA Formats: 8 00:08:28.018 Current LBA Format: LBA Format #04 00:08:28.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.018 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.018 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.018 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.018 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.018 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.018 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.018 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.018 00:08:28.018 NVM Specific Namespace Data 00:08:28.018 =========================== 00:08:28.018 Logical Block Storage Tag Mask: 0 00:08:28.018 Protection Information Capabilities: 00:08:28.018 16b Guard Protection Information Storage Tag Support: No 00:08:28.018 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.018 Storage Tag Check Read Support: No 00:08:28.018 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:28.018 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Namespace ID:3 00:08:28.018 Error Recovery Timeout: Unlimited 00:08:28.018 Command Set Identifier: NVM (00h) 00:08:28.018 Deallocate: Supported 00:08:28.018 Deallocated/Unwritten Error: Supported 00:08:28.018 Deallocated Read Value: All 0x00 00:08:28.018 Deallocate in Write Zeroes: Not Supported 00:08:28.018 Deallocated Guard Field: 0xFFFF 00:08:28.018 Flush: Supported 00:08:28.018 Reservation: Not Supported 00:08:28.018 Namespace Sharing Capabilities: Private 00:08:28.018 Size (in LBAs): 1048576 (4GiB) 00:08:28.018 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.018 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.018 Thin Provisioning: Not Supported 00:08:28.018 Per-NS Atomic Units: No 00:08:28.018 Maximum Single Source Range Length: 128 00:08:28.018 Maximum Copy Length: 128 00:08:28.018 Maximum Source Range Count: 128 00:08:28.018 NGUID/EUI64 Never Reused: No 00:08:28.018 Namespace Write Protected: No 00:08:28.018 Number of LBA Formats: 8 00:08:28.018 Current LBA Format: LBA Format #04 00:08:28.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.018 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.018 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.018 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.018 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.018 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.018 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.018 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.018 00:08:28.018 NVM Specific Namespace Data 00:08:28.018 =========================== 00:08:28.018 Logical Block Storage Tag Mask: 0 00:08:28.018 Protection Information Capabilities: 00:08:28.018 16b Guard Protection Information Storage Tag Support: No 00:08:28.018 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.018 Storage Tag Check Read Support: No 00:08:28.018 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.018 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:28.018 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:28.276 ===================================================== 00:08:28.276 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:28.276 ===================================================== 00:08:28.276 Controller Capabilities/Features 00:08:28.276 ================================ 00:08:28.276 Vendor ID: 1b36 00:08:28.276 Subsystem Vendor ID: 1af4 00:08:28.276 Serial Number: 12340 00:08:28.276 Model Number: QEMU NVMe Ctrl 00:08:28.276 Firmware Version: 8.0.0 00:08:28.276 Recommended Arb Burst: 6 00:08:28.276 IEEE OUI Identifier: 00 54 52 00:08:28.276 Multi-path I/O 00:08:28.276 May have multiple subsystem ports: No 00:08:28.276 May have multiple controllers: No 00:08:28.276 Associated with SR-IOV VF: No 00:08:28.276 Max Data Transfer Size: 524288 00:08:28.276 Max Number of Namespaces: 256 00:08:28.276 Max Number of I/O Queues: 64 00:08:28.276 NVMe Specification Version (VS): 1.4 00:08:28.276 NVMe Specification Version (Identify): 1.4 00:08:28.276 Maximum Queue Entries: 2048 00:08:28.276 Contiguous Queues Required: Yes 00:08:28.276 Arbitration Mechanisms Supported 00:08:28.276 Weighted Round Robin: Not Supported 00:08:28.276 Vendor Specific: Not Supported 00:08:28.276 Reset Timeout: 7500 ms 00:08:28.276 Doorbell Stride: 4 bytes 00:08:28.276 NVM Subsystem Reset: Not Supported 00:08:28.276 Command Sets Supported 00:08:28.276 NVM Command Set: Supported 00:08:28.276 Boot Partition: Not Supported 00:08:28.276 Memory Page Size Minimum: 4096 bytes 00:08:28.276 Memory Page Size Maximum: 65536 bytes 00:08:28.276 Persistent Memory Region: Not Supported 00:08:28.276 Optional Asynchronous Events Supported 00:08:28.276 Namespace Attribute Notices: Supported 00:08:28.276 Firmware Activation Notices: Not Supported 00:08:28.276 ANA Change Notices: Not Supported 00:08:28.276 PLE Aggregate Log Change Notices: Not Supported 00:08:28.276 LBA Status Info Alert Notices: Not Supported 00:08:28.276 EGE Aggregate Log Change Notices: Not Supported 00:08:28.276 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.276 Zone Descriptor Change Notices: Not Supported 00:08:28.276 Discovery Log Change Notices: Not Supported 00:08:28.276 Controller Attributes 00:08:28.276 128-bit Host Identifier: Not Supported 00:08:28.276 Non-Operational Permissive Mode: Not Supported 00:08:28.276 NVM Sets: Not Supported 00:08:28.276 Read Recovery Levels: Not Supported 00:08:28.276 Endurance Groups: Not Supported 00:08:28.276 Predictable Latency Mode: Not Supported 00:08:28.276 Traffic Based Keep ALive: Not Supported 00:08:28.276 Namespace Granularity: Not Supported 00:08:28.276 SQ Associations: Not Supported 00:08:28.276 UUID List: Not Supported 00:08:28.276 Multi-Domain Subsystem: Not Supported 00:08:28.276 Fixed Capacity Management: Not Supported 00:08:28.276 Variable Capacity Management: Not Supported 00:08:28.276 Delete Endurance Group: Not Supported 00:08:28.276 Delete NVM Set: Not Supported 00:08:28.276 Extended LBA Formats Supported: Supported 00:08:28.276 Flexible Data Placement Supported: Not Supported 00:08:28.276 00:08:28.276 Controller Memory Buffer Support 00:08:28.276 ================================ 00:08:28.276 Supported: No 00:08:28.276 00:08:28.276 Persistent Memory Region Support 00:08:28.276 ================================ 00:08:28.276 Supported: No 00:08:28.276 00:08:28.276 Admin Command Set Attributes 00:08:28.276 ============================ 00:08:28.276 Security Send/Receive: Not Supported 00:08:28.276 
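
The nvme.sh@15 trace above shows the test stepping through each PCIe controller in turn and running spdk_nvme_identify against it. Expanded by hand for the four BDFs seen in this run, the loop is equivalent to the sketch below (BDF list taken from the controllers identified in this log; illustrative only):

# Hand-expanded form of the traced nvme.sh loop
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0
done
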
Format NVM: Supported 00:08:28.276 Firmware Activate/Download: Not Supported 00:08:28.276 Namespace Management: Supported 00:08:28.276 Device Self-Test: Not Supported 00:08:28.276 Directives: Supported 00:08:28.276 NVMe-MI: Not Supported 00:08:28.276 Virtualization Management: Not Supported 00:08:28.276 Doorbell Buffer Config: Supported 00:08:28.276 Get LBA Status Capability: Not Supported 00:08:28.276 Command & Feature Lockdown Capability: Not Supported 00:08:28.276 Abort Command Limit: 4 00:08:28.276 Async Event Request Limit: 4 00:08:28.276 Number of Firmware Slots: N/A 00:08:28.276 Firmware Slot 1 Read-Only: N/A 00:08:28.276 Firmware Activation Without Reset: N/A 00:08:28.276 Multiple Update Detection Support: N/A 00:08:28.276 Firmware Update Granularity: No Information Provided 00:08:28.276 Per-Namespace SMART Log: Yes 00:08:28.276 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.276 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:28.276 Command Effects Log Page: Supported 00:08:28.276 Get Log Page Extended Data: Supported 00:08:28.276 Telemetry Log Pages: Not Supported 00:08:28.276 Persistent Event Log Pages: Not Supported 00:08:28.276 Supported Log Pages Log Page: May Support 00:08:28.277 Commands Supported & Effects Log Page: Not Supported 00:08:28.277 Feature Identifiers & Effects Log Page:May Support 00:08:28.277 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.277 Data Area 4 for Telemetry Log: Not Supported 00:08:28.277 Error Log Page Entries Supported: 1 00:08:28.277 Keep Alive: Not Supported 00:08:28.277 00:08:28.277 NVM Command Set Attributes 00:08:28.277 ========================== 00:08:28.277 Submission Queue Entry Size 00:08:28.277 Max: 64 00:08:28.277 Min: 64 00:08:28.277 Completion Queue Entry Size 00:08:28.277 Max: 16 00:08:28.277 Min: 16 00:08:28.277 Number of Namespaces: 256 00:08:28.277 Compare Command: Supported 00:08:28.277 Write Uncorrectable Command: Not Supported 00:08:28.277 Dataset Management Command: Supported 00:08:28.277 Write Zeroes Command: Supported 00:08:28.277 Set Features Save Field: Supported 00:08:28.277 Reservations: Not Supported 00:08:28.277 Timestamp: Supported 00:08:28.277 Copy: Supported 00:08:28.277 Volatile Write Cache: Present 00:08:28.277 Atomic Write Unit (Normal): 1 00:08:28.277 Atomic Write Unit (PFail): 1 00:08:28.277 Atomic Compare & Write Unit: 1 00:08:28.277 Fused Compare & Write: Not Supported 00:08:28.277 Scatter-Gather List 00:08:28.277 SGL Command Set: Supported 00:08:28.277 SGL Keyed: Not Supported 00:08:28.277 SGL Bit Bucket Descriptor: Not Supported 00:08:28.277 SGL Metadata Pointer: Not Supported 00:08:28.277 Oversized SGL: Not Supported 00:08:28.277 SGL Metadata Address: Not Supported 00:08:28.277 SGL Offset: Not Supported 00:08:28.277 Transport SGL Data Block: Not Supported 00:08:28.277 Replay Protected Memory Block: Not Supported 00:08:28.277 00:08:28.277 Firmware Slot Information 00:08:28.277 ========================= 00:08:28.277 Active slot: 1 00:08:28.277 Slot 1 Firmware Revision: 1.0 00:08:28.277 00:08:28.277 00:08:28.277 Commands Supported and Effects 00:08:28.277 ============================== 00:08:28.277 Admin Commands 00:08:28.277 -------------- 00:08:28.277 Delete I/O Submission Queue (00h): Supported 00:08:28.277 Create I/O Submission Queue (01h): Supported 00:08:28.277 Get Log Page (02h): Supported 00:08:28.277 Delete I/O Completion Queue (04h): Supported 00:08:28.277 Create I/O Completion Queue (05h): Supported 00:08:28.277 Identify (06h): Supported 00:08:28.277 Abort (08h): Supported 
00:08:28.277 Set Features (09h): Supported 00:08:28.277 Get Features (0Ah): Supported 00:08:28.277 Asynchronous Event Request (0Ch): Supported 00:08:28.277 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.277 Directive Send (19h): Supported 00:08:28.277 Directive Receive (1Ah): Supported 00:08:28.277 Virtualization Management (1Ch): Supported 00:08:28.277 Doorbell Buffer Config (7Ch): Supported 00:08:28.277 Format NVM (80h): Supported LBA-Change 00:08:28.277 I/O Commands 00:08:28.277 ------------ 00:08:28.277 Flush (00h): Supported LBA-Change 00:08:28.277 Write (01h): Supported LBA-Change 00:08:28.277 Read (02h): Supported 00:08:28.277 Compare (05h): Supported 00:08:28.277 Write Zeroes (08h): Supported LBA-Change 00:08:28.277 Dataset Management (09h): Supported LBA-Change 00:08:28.277 Unknown (0Ch): Supported 00:08:28.277 Unknown (12h): Supported 00:08:28.277 Copy (19h): Supported LBA-Change 00:08:28.277 Unknown (1Dh): Supported LBA-Change 00:08:28.277 00:08:28.277 Error Log 00:08:28.277 ========= 00:08:28.277 00:08:28.277 Arbitration 00:08:28.277 =========== 00:08:28.277 Arbitration Burst: no limit 00:08:28.277 00:08:28.277 Power Management 00:08:28.277 ================ 00:08:28.277 Number of Power States: 1 00:08:28.277 Current Power State: Power State #0 00:08:28.277 Power State #0: 00:08:28.277 Max Power: 25.00 W 00:08:28.277 Non-Operational State: Operational 00:08:28.277 Entry Latency: 16 microseconds 00:08:28.277 Exit Latency: 4 microseconds 00:08:28.277 Relative Read Throughput: 0 00:08:28.277 Relative Read Latency: 0 00:08:28.277 Relative Write Throughput: 0 00:08:28.277 Relative Write Latency: 0 00:08:28.277 Idle Power: Not Reported 00:08:28.277 Active Power: Not Reported 00:08:28.277 Non-Operational Permissive Mode: Not Supported 00:08:28.277 00:08:28.277 Health Information 00:08:28.277 ================== 00:08:28.277 Critical Warnings: 00:08:28.277 Available Spare Space: OK 00:08:28.277 Temperature: OK 00:08:28.277 Device Reliability: OK 00:08:28.277 Read Only: No 00:08:28.277 Volatile Memory Backup: OK 00:08:28.277 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.277 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.277 Available Spare: 0% 00:08:28.277 Available Spare Threshold: 0% 00:08:28.277 Life Percentage Used: 0% 00:08:28.277 Data Units Read: 656 00:08:28.277 Data Units Written: 584 00:08:28.277 Host Read Commands: 36821 00:08:28.277 Host Write Commands: 36607 00:08:28.277 Controller Busy Time: 0 minutes 00:08:28.277 Power Cycles: 0 00:08:28.277 Power On Hours: 0 hours 00:08:28.277 Unsafe Shutdowns: 0 00:08:28.277 Unrecoverable Media Errors: 0 00:08:28.277 Lifetime Error Log Entries: 0 00:08:28.277 Warning Temperature Time: 0 minutes 00:08:28.277 Critical Temperature Time: 0 minutes 00:08:28.277 00:08:28.277 Number of Queues 00:08:28.277 ================ 00:08:28.277 Number of I/O Submission Queues: 64 00:08:28.277 Number of I/O Completion Queues: 64 00:08:28.277 00:08:28.277 ZNS Specific Controller Data 00:08:28.277 ============================ 00:08:28.277 Zone Append Size Limit: 0 00:08:28.277 00:08:28.277 00:08:28.277 Active Namespaces 00:08:28.277 ================= 00:08:28.277 Namespace ID:1 00:08:28.277 Error Recovery Timeout: Unlimited 00:08:28.277 Command Set Identifier: NVM (00h) 00:08:28.277 Deallocate: Supported 00:08:28.277 Deallocated/Unwritten Error: Supported 00:08:28.277 Deallocated Read Value: All 0x00 00:08:28.277 Deallocate in Write Zeroes: Not Supported 00:08:28.277 Deallocated Guard Field: 0xFFFF 00:08:28.277 Flush: 
Supported 00:08:28.277 Reservation: Not Supported 00:08:28.277 Metadata Transferred as: Separate Metadata Buffer 00:08:28.277 Namespace Sharing Capabilities: Private 00:08:28.277 Size (in LBAs): 1548666 (5GiB) 00:08:28.277 Capacity (in LBAs): 1548666 (5GiB) 00:08:28.277 Utilization (in LBAs): 1548666 (5GiB) 00:08:28.277 Thin Provisioning: Not Supported 00:08:28.277 Per-NS Atomic Units: No 00:08:28.277 Maximum Single Source Range Length: 128 00:08:28.277 Maximum Copy Length: 128 00:08:28.277 Maximum Source Range Count: 128 00:08:28.277 NGUID/EUI64 Never Reused: No 00:08:28.277 Namespace Write Protected: No 00:08:28.277 Number of LBA Formats: 8 00:08:28.277 Current LBA Format: LBA Format #07 00:08:28.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.277 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.277 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.277 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.277 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.277 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.277 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.277 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.277 00:08:28.277 NVM Specific Namespace Data 00:08:28.277 =========================== 00:08:28.277 Logical Block Storage Tag Mask: 0 00:08:28.277 Protection Information Capabilities: 00:08:28.277 16b Guard Protection Information Storage Tag Support: No 00:08:28.277 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.277 Storage Tag Check Read Support: No 00:08:28.277 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.278 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:28.278 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:28.535 ===================================================== 00:08:28.535 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:28.535 ===================================================== 00:08:28.535 Controller Capabilities/Features 00:08:28.535 ================================ 00:08:28.535 Vendor ID: 1b36 00:08:28.535 Subsystem Vendor ID: 1af4 00:08:28.535 Serial Number: 12341 00:08:28.535 Model Number: QEMU NVMe Ctrl 00:08:28.535 Firmware Version: 8.0.0 00:08:28.535 Recommended Arb Burst: 6 00:08:28.535 IEEE OUI Identifier: 00 54 52 00:08:28.535 Multi-path I/O 00:08:28.535 May have multiple subsystem ports: No 00:08:28.535 May have multiple controllers: No 00:08:28.535 Associated with SR-IOV VF: No 00:08:28.535 Max Data Transfer Size: 524288 00:08:28.535 Max Number of Namespaces: 256 00:08:28.535 Max Number of I/O Queues: 64 00:08:28.535 NVMe 
Specification Version (VS): 1.4 00:08:28.535 NVMe Specification Version (Identify): 1.4 00:08:28.535 Maximum Queue Entries: 2048 00:08:28.535 Contiguous Queues Required: Yes 00:08:28.535 Arbitration Mechanisms Supported 00:08:28.535 Weighted Round Robin: Not Supported 00:08:28.535 Vendor Specific: Not Supported 00:08:28.535 Reset Timeout: 7500 ms 00:08:28.535 Doorbell Stride: 4 bytes 00:08:28.535 NVM Subsystem Reset: Not Supported 00:08:28.535 Command Sets Supported 00:08:28.535 NVM Command Set: Supported 00:08:28.535 Boot Partition: Not Supported 00:08:28.535 Memory Page Size Minimum: 4096 bytes 00:08:28.536 Memory Page Size Maximum: 65536 bytes 00:08:28.536 Persistent Memory Region: Not Supported 00:08:28.536 Optional Asynchronous Events Supported 00:08:28.536 Namespace Attribute Notices: Supported 00:08:28.536 Firmware Activation Notices: Not Supported 00:08:28.536 ANA Change Notices: Not Supported 00:08:28.536 PLE Aggregate Log Change Notices: Not Supported 00:08:28.536 LBA Status Info Alert Notices: Not Supported 00:08:28.536 EGE Aggregate Log Change Notices: Not Supported 00:08:28.536 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.536 Zone Descriptor Change Notices: Not Supported 00:08:28.536 Discovery Log Change Notices: Not Supported 00:08:28.536 Controller Attributes 00:08:28.536 128-bit Host Identifier: Not Supported 00:08:28.536 Non-Operational Permissive Mode: Not Supported 00:08:28.536 NVM Sets: Not Supported 00:08:28.536 Read Recovery Levels: Not Supported 00:08:28.536 Endurance Groups: Not Supported 00:08:28.536 Predictable Latency Mode: Not Supported 00:08:28.536 Traffic Based Keep ALive: Not Supported 00:08:28.536 Namespace Granularity: Not Supported 00:08:28.536 SQ Associations: Not Supported 00:08:28.536 UUID List: Not Supported 00:08:28.536 Multi-Domain Subsystem: Not Supported 00:08:28.536 Fixed Capacity Management: Not Supported 00:08:28.536 Variable Capacity Management: Not Supported 00:08:28.536 Delete Endurance Group: Not Supported 00:08:28.536 Delete NVM Set: Not Supported 00:08:28.536 Extended LBA Formats Supported: Supported 00:08:28.536 Flexible Data Placement Supported: Not Supported 00:08:28.536 00:08:28.536 Controller Memory Buffer Support 00:08:28.536 ================================ 00:08:28.536 Supported: No 00:08:28.536 00:08:28.536 Persistent Memory Region Support 00:08:28.536 ================================ 00:08:28.536 Supported: No 00:08:28.536 00:08:28.536 Admin Command Set Attributes 00:08:28.536 ============================ 00:08:28.536 Security Send/Receive: Not Supported 00:08:28.536 Format NVM: Supported 00:08:28.536 Firmware Activate/Download: Not Supported 00:08:28.536 Namespace Management: Supported 00:08:28.536 Device Self-Test: Not Supported 00:08:28.536 Directives: Supported 00:08:28.536 NVMe-MI: Not Supported 00:08:28.536 Virtualization Management: Not Supported 00:08:28.536 Doorbell Buffer Config: Supported 00:08:28.536 Get LBA Status Capability: Not Supported 00:08:28.536 Command & Feature Lockdown Capability: Not Supported 00:08:28.536 Abort Command Limit: 4 00:08:28.536 Async Event Request Limit: 4 00:08:28.536 Number of Firmware Slots: N/A 00:08:28.536 Firmware Slot 1 Read-Only: N/A 00:08:28.536 Firmware Activation Without Reset: N/A 00:08:28.536 Multiple Update Detection Support: N/A 00:08:28.536 Firmware Update Granularity: No Information Provided 00:08:28.536 Per-Namespace SMART Log: Yes 00:08:28.536 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.536 Subsystem NQN: nqn.2019-08.org.qemu:12341 
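
Individual fields can be pulled out of these dumps with standard text tools. A minimal sketch, assuming the identify output format shown above (the helper name is hypothetical):

# Hypothetical helper: extract one field from spdk_nvme_identify output
identify_bdf() {
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$1" -i 0
}
# NQNs themselves contain ':', so strip the label with sed instead of splitting on ':'
identify_bdf 0000:00:11.0 | sed -n 's/^Subsystem NQN:[[:space:]]*//p'
# expected here: nqn.2019-08.org.qemu:12341
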
00:08:28.536 Command Effects Log Page: Supported 00:08:28.536 Get Log Page Extended Data: Supported 00:08:28.536 Telemetry Log Pages: Not Supported 00:08:28.536 Persistent Event Log Pages: Not Supported 00:08:28.536 Supported Log Pages Log Page: May Support 00:08:28.536 Commands Supported & Effects Log Page: Not Supported 00:08:28.536 Feature Identifiers & Effects Log Page:May Support 00:08:28.536 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.536 Data Area 4 for Telemetry Log: Not Supported 00:08:28.536 Error Log Page Entries Supported: 1 00:08:28.536 Keep Alive: Not Supported 00:08:28.536 00:08:28.536 NVM Command Set Attributes 00:08:28.536 ========================== 00:08:28.536 Submission Queue Entry Size 00:08:28.536 Max: 64 00:08:28.536 Min: 64 00:08:28.536 Completion Queue Entry Size 00:08:28.536 Max: 16 00:08:28.536 Min: 16 00:08:28.536 Number of Namespaces: 256 00:08:28.536 Compare Command: Supported 00:08:28.536 Write Uncorrectable Command: Not Supported 00:08:28.536 Dataset Management Command: Supported 00:08:28.536 Write Zeroes Command: Supported 00:08:28.536 Set Features Save Field: Supported 00:08:28.536 Reservations: Not Supported 00:08:28.536 Timestamp: Supported 00:08:28.536 Copy: Supported 00:08:28.536 Volatile Write Cache: Present 00:08:28.536 Atomic Write Unit (Normal): 1 00:08:28.536 Atomic Write Unit (PFail): 1 00:08:28.536 Atomic Compare & Write Unit: 1 00:08:28.536 Fused Compare & Write: Not Supported 00:08:28.536 Scatter-Gather List 00:08:28.536 SGL Command Set: Supported 00:08:28.536 SGL Keyed: Not Supported 00:08:28.536 SGL Bit Bucket Descriptor: Not Supported 00:08:28.536 SGL Metadata Pointer: Not Supported 00:08:28.536 Oversized SGL: Not Supported 00:08:28.536 SGL Metadata Address: Not Supported 00:08:28.536 SGL Offset: Not Supported 00:08:28.536 Transport SGL Data Block: Not Supported 00:08:28.536 Replay Protected Memory Block: Not Supported 00:08:28.536 00:08:28.536 Firmware Slot Information 00:08:28.536 ========================= 00:08:28.536 Active slot: 1 00:08:28.536 Slot 1 Firmware Revision: 1.0 00:08:28.536 00:08:28.536 00:08:28.536 Commands Supported and Effects 00:08:28.536 ============================== 00:08:28.536 Admin Commands 00:08:28.536 -------------- 00:08:28.536 Delete I/O Submission Queue (00h): Supported 00:08:28.536 Create I/O Submission Queue (01h): Supported 00:08:28.536 Get Log Page (02h): Supported 00:08:28.536 Delete I/O Completion Queue (04h): Supported 00:08:28.536 Create I/O Completion Queue (05h): Supported 00:08:28.536 Identify (06h): Supported 00:08:28.536 Abort (08h): Supported 00:08:28.536 Set Features (09h): Supported 00:08:28.536 Get Features (0Ah): Supported 00:08:28.536 Asynchronous Event Request (0Ch): Supported 00:08:28.536 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.536 Directive Send (19h): Supported 00:08:28.536 Directive Receive (1Ah): Supported 00:08:28.536 Virtualization Management (1Ch): Supported 00:08:28.536 Doorbell Buffer Config (7Ch): Supported 00:08:28.536 Format NVM (80h): Supported LBA-Change 00:08:28.536 I/O Commands 00:08:28.536 ------------ 00:08:28.536 Flush (00h): Supported LBA-Change 00:08:28.536 Write (01h): Supported LBA-Change 00:08:28.536 Read (02h): Supported 00:08:28.536 Compare (05h): Supported 00:08:28.536 Write Zeroes (08h): Supported LBA-Change 00:08:28.536 Dataset Management (09h): Supported LBA-Change 00:08:28.536 Unknown (0Ch): Supported 00:08:28.536 Unknown (12h): Supported 00:08:28.536 Copy (19h): Supported LBA-Change 00:08:28.536 Unknown (1Dh): 
Supported LBA-Change 00:08:28.536 00:08:28.536 Error Log 00:08:28.536 ========= 00:08:28.536 00:08:28.536 Arbitration 00:08:28.536 =========== 00:08:28.536 Arbitration Burst: no limit 00:08:28.536 00:08:28.536 Power Management 00:08:28.536 ================ 00:08:28.536 Number of Power States: 1 00:08:28.536 Current Power State: Power State #0 00:08:28.536 Power State #0: 00:08:28.536 Max Power: 25.00 W 00:08:28.536 Non-Operational State: Operational 00:08:28.536 Entry Latency: 16 microseconds 00:08:28.536 Exit Latency: 4 microseconds 00:08:28.536 Relative Read Throughput: 0 00:08:28.536 Relative Read Latency: 0 00:08:28.536 Relative Write Throughput: 0 00:08:28.536 Relative Write Latency: 0 00:08:28.536 Idle Power: Not Reported 00:08:28.536 Active Power: Not Reported 00:08:28.536 Non-Operational Permissive Mode: Not Supported 00:08:28.536 00:08:28.536 Health Information 00:08:28.536 ================== 00:08:28.536 Critical Warnings: 00:08:28.536 Available Spare Space: OK 00:08:28.536 Temperature: OK 00:08:28.536 Device Reliability: OK 00:08:28.537 Read Only: No 00:08:28.537 Volatile Memory Backup: OK 00:08:28.537 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.537 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.537 Available Spare: 0% 00:08:28.537 Available Spare Threshold: 0% 00:08:28.537 Life Percentage Used: 0% 00:08:28.537 Data Units Read: 993 00:08:28.537 Data Units Written: 866 00:08:28.537 Host Read Commands: 52835 00:08:28.537 Host Write Commands: 51719 00:08:28.537 Controller Busy Time: 0 minutes 00:08:28.537 Power Cycles: 0 00:08:28.537 Power On Hours: 0 hours 00:08:28.537 Unsafe Shutdowns: 0 00:08:28.537 Unrecoverable Media Errors: 0 00:08:28.537 Lifetime Error Log Entries: 0 00:08:28.537 Warning Temperature Time: 0 minutes 00:08:28.537 Critical Temperature Time: 0 minutes 00:08:28.537 00:08:28.537 Number of Queues 00:08:28.537 ================ 00:08:28.537 Number of I/O Submission Queues: 64 00:08:28.537 Number of I/O Completion Queues: 64 00:08:28.537 00:08:28.537 ZNS Specific Controller Data 00:08:28.537 ============================ 00:08:28.537 Zone Append Size Limit: 0 00:08:28.537 00:08:28.537 00:08:28.537 Active Namespaces 00:08:28.537 ================= 00:08:28.537 Namespace ID:1 00:08:28.537 Error Recovery Timeout: Unlimited 00:08:28.537 Command Set Identifier: NVM (00h) 00:08:28.537 Deallocate: Supported 00:08:28.537 Deallocated/Unwritten Error: Supported 00:08:28.537 Deallocated Read Value: All 0x00 00:08:28.537 Deallocate in Write Zeroes: Not Supported 00:08:28.537 Deallocated Guard Field: 0xFFFF 00:08:28.537 Flush: Supported 00:08:28.537 Reservation: Not Supported 00:08:28.537 Namespace Sharing Capabilities: Private 00:08:28.537 Size (in LBAs): 1310720 (5GiB) 00:08:28.537 Capacity (in LBAs): 1310720 (5GiB) 00:08:28.537 Utilization (in LBAs): 1310720 (5GiB) 00:08:28.537 Thin Provisioning: Not Supported 00:08:28.537 Per-NS Atomic Units: No 00:08:28.537 Maximum Single Source Range Length: 128 00:08:28.537 Maximum Copy Length: 128 00:08:28.537 Maximum Source Range Count: 128 00:08:28.537 NGUID/EUI64 Never Reused: No 00:08:28.537 Namespace Write Protected: No 00:08:28.537 Number of LBA Formats: 8 00:08:28.537 Current LBA Format: LBA Format #04 00:08:28.537 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.537 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.537 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.537 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.537 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:28.537 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.537 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.537 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.537 00:08:28.537 NVM Specific Namespace Data 00:08:28.537 =========================== 00:08:28.537 Logical Block Storage Tag Mask: 0 00:08:28.537 Protection Information Capabilities: 00:08:28.537 16b Guard Protection Information Storage Tag Support: No 00:08:28.537 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.537 Storage Tag Check Read Support: No 00:08:28.537 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.537 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:28.537 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:28.794 ===================================================== 00:08:28.794 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:28.794 ===================================================== 00:08:28.794 Controller Capabilities/Features 00:08:28.794 ================================ 00:08:28.794 Vendor ID: 1b36 00:08:28.794 Subsystem Vendor ID: 1af4 00:08:28.794 Serial Number: 12342 00:08:28.794 Model Number: QEMU NVMe Ctrl 00:08:28.794 Firmware Version: 8.0.0 00:08:28.794 Recommended Arb Burst: 6 00:08:28.794 IEEE OUI Identifier: 00 54 52 00:08:28.794 Multi-path I/O 00:08:28.794 May have multiple subsystem ports: No 00:08:28.794 May have multiple controllers: No 00:08:28.794 Associated with SR-IOV VF: No 00:08:28.794 Max Data Transfer Size: 524288 00:08:28.794 Max Number of Namespaces: 256 00:08:28.794 Max Number of I/O Queues: 64 00:08:28.794 NVMe Specification Version (VS): 1.4 00:08:28.794 NVMe Specification Version (Identify): 1.4 00:08:28.794 Maximum Queue Entries: 2048 00:08:28.794 Contiguous Queues Required: Yes 00:08:28.794 Arbitration Mechanisms Supported 00:08:28.794 Weighted Round Robin: Not Supported 00:08:28.794 Vendor Specific: Not Supported 00:08:28.794 Reset Timeout: 7500 ms 00:08:28.794 Doorbell Stride: 4 bytes 00:08:28.794 NVM Subsystem Reset: Not Supported 00:08:28.794 Command Sets Supported 00:08:28.794 NVM Command Set: Supported 00:08:28.794 Boot Partition: Not Supported 00:08:28.794 Memory Page Size Minimum: 4096 bytes 00:08:28.794 Memory Page Size Maximum: 65536 bytes 00:08:28.794 Persistent Memory Region: Not Supported 00:08:28.794 Optional Asynchronous Events Supported 00:08:28.794 Namespace Attribute Notices: Supported 00:08:28.794 Firmware Activation Notices: Not Supported 00:08:28.794 ANA Change Notices: Not Supported 00:08:28.794 PLE Aggregate Log Change Notices: Not Supported 00:08:28.794 LBA Status Info Alert Notices: 
Not Supported 00:08:28.794 EGE Aggregate Log Change Notices: Not Supported 00:08:28.794 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.794 Zone Descriptor Change Notices: Not Supported 00:08:28.794 Discovery Log Change Notices: Not Supported 00:08:28.794 Controller Attributes 00:08:28.795 128-bit Host Identifier: Not Supported 00:08:28.795 Non-Operational Permissive Mode: Not Supported 00:08:28.795 NVM Sets: Not Supported 00:08:28.795 Read Recovery Levels: Not Supported 00:08:28.795 Endurance Groups: Not Supported 00:08:28.795 Predictable Latency Mode: Not Supported 00:08:28.795 Traffic Based Keep ALive: Not Supported 00:08:28.795 Namespace Granularity: Not Supported 00:08:28.795 SQ Associations: Not Supported 00:08:28.795 UUID List: Not Supported 00:08:28.795 Multi-Domain Subsystem: Not Supported 00:08:28.795 Fixed Capacity Management: Not Supported 00:08:28.795 Variable Capacity Management: Not Supported 00:08:28.795 Delete Endurance Group: Not Supported 00:08:28.795 Delete NVM Set: Not Supported 00:08:28.795 Extended LBA Formats Supported: Supported 00:08:28.795 Flexible Data Placement Supported: Not Supported 00:08:28.795 00:08:28.795 Controller Memory Buffer Support 00:08:28.795 ================================ 00:08:28.795 Supported: No 00:08:28.795 00:08:28.795 Persistent Memory Region Support 00:08:28.795 ================================ 00:08:28.795 Supported: No 00:08:28.795 00:08:28.795 Admin Command Set Attributes 00:08:28.795 ============================ 00:08:28.795 Security Send/Receive: Not Supported 00:08:28.795 Format NVM: Supported 00:08:28.795 Firmware Activate/Download: Not Supported 00:08:28.795 Namespace Management: Supported 00:08:28.795 Device Self-Test: Not Supported 00:08:28.795 Directives: Supported 00:08:28.795 NVMe-MI: Not Supported 00:08:28.795 Virtualization Management: Not Supported 00:08:28.795 Doorbell Buffer Config: Supported 00:08:28.795 Get LBA Status Capability: Not Supported 00:08:28.795 Command & Feature Lockdown Capability: Not Supported 00:08:28.795 Abort Command Limit: 4 00:08:28.795 Async Event Request Limit: 4 00:08:28.795 Number of Firmware Slots: N/A 00:08:28.795 Firmware Slot 1 Read-Only: N/A 00:08:28.795 Firmware Activation Without Reset: N/A 00:08:28.795 Multiple Update Detection Support: N/A 00:08:28.795 Firmware Update Granularity: No Information Provided 00:08:28.795 Per-Namespace SMART Log: Yes 00:08:28.795 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.795 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:28.795 Command Effects Log Page: Supported 00:08:28.795 Get Log Page Extended Data: Supported 00:08:28.795 Telemetry Log Pages: Not Supported 00:08:28.795 Persistent Event Log Pages: Not Supported 00:08:28.795 Supported Log Pages Log Page: May Support 00:08:28.795 Commands Supported & Effects Log Page: Not Supported 00:08:28.795 Feature Identifiers & Effects Log Page:May Support 00:08:28.795 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.795 Data Area 4 for Telemetry Log: Not Supported 00:08:28.795 Error Log Page Entries Supported: 1 00:08:28.795 Keep Alive: Not Supported 00:08:28.795 00:08:28.795 NVM Command Set Attributes 00:08:28.795 ========================== 00:08:28.795 Submission Queue Entry Size 00:08:28.795 Max: 64 00:08:28.795 Min: 64 00:08:28.795 Completion Queue Entry Size 00:08:28.795 Max: 16 00:08:28.795 Min: 16 00:08:28.795 Number of Namespaces: 256 00:08:28.795 Compare Command: Supported 00:08:28.795 Write Uncorrectable Command: Not Supported 00:08:28.795 Dataset Management Command: 
Supported 00:08:28.795 Write Zeroes Command: Supported 00:08:28.795 Set Features Save Field: Supported 00:08:28.795 Reservations: Not Supported 00:08:28.795 Timestamp: Supported 00:08:28.795 Copy: Supported 00:08:28.795 Volatile Write Cache: Present 00:08:28.795 Atomic Write Unit (Normal): 1 00:08:28.795 Atomic Write Unit (PFail): 1 00:08:28.795 Atomic Compare & Write Unit: 1 00:08:28.795 Fused Compare & Write: Not Supported 00:08:28.795 Scatter-Gather List 00:08:28.795 SGL Command Set: Supported 00:08:28.795 SGL Keyed: Not Supported 00:08:28.795 SGL Bit Bucket Descriptor: Not Supported 00:08:28.795 SGL Metadata Pointer: Not Supported 00:08:28.795 Oversized SGL: Not Supported 00:08:28.795 SGL Metadata Address: Not Supported 00:08:28.795 SGL Offset: Not Supported 00:08:28.795 Transport SGL Data Block: Not Supported 00:08:28.795 Replay Protected Memory Block: Not Supported 00:08:28.795 00:08:28.795 Firmware Slot Information 00:08:28.795 ========================= 00:08:28.795 Active slot: 1 00:08:28.795 Slot 1 Firmware Revision: 1.0 00:08:28.795 00:08:28.795 00:08:28.795 Commands Supported and Effects 00:08:28.795 ============================== 00:08:28.795 Admin Commands 00:08:28.795 -------------- 00:08:28.795 Delete I/O Submission Queue (00h): Supported 00:08:28.795 Create I/O Submission Queue (01h): Supported 00:08:28.795 Get Log Page (02h): Supported 00:08:28.795 Delete I/O Completion Queue (04h): Supported 00:08:28.795 Create I/O Completion Queue (05h): Supported 00:08:28.795 Identify (06h): Supported 00:08:28.795 Abort (08h): Supported 00:08:28.795 Set Features (09h): Supported 00:08:28.795 Get Features (0Ah): Supported 00:08:28.795 Asynchronous Event Request (0Ch): Supported 00:08:28.795 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.795 Directive Send (19h): Supported 00:08:28.795 Directive Receive (1Ah): Supported 00:08:28.795 Virtualization Management (1Ch): Supported 00:08:28.795 Doorbell Buffer Config (7Ch): Supported 00:08:28.795 Format NVM (80h): Supported LBA-Change 00:08:28.795 I/O Commands 00:08:28.795 ------------ 00:08:28.795 Flush (00h): Supported LBA-Change 00:08:28.795 Write (01h): Supported LBA-Change 00:08:28.795 Read (02h): Supported 00:08:28.795 Compare (05h): Supported 00:08:28.795 Write Zeroes (08h): Supported LBA-Change 00:08:28.795 Dataset Management (09h): Supported LBA-Change 00:08:28.795 Unknown (0Ch): Supported 00:08:28.795 Unknown (12h): Supported 00:08:28.795 Copy (19h): Supported LBA-Change 00:08:28.795 Unknown (1Dh): Supported LBA-Change 00:08:28.795 00:08:28.795 Error Log 00:08:28.795 ========= 00:08:28.795 00:08:28.795 Arbitration 00:08:28.795 =========== 00:08:28.795 Arbitration Burst: no limit 00:08:28.795 00:08:28.795 Power Management 00:08:28.795 ================ 00:08:28.795 Number of Power States: 1 00:08:28.795 Current Power State: Power State #0 00:08:28.795 Power State #0: 00:08:28.795 Max Power: 25.00 W 00:08:28.795 Non-Operational State: Operational 00:08:28.795 Entry Latency: 16 microseconds 00:08:28.795 Exit Latency: 4 microseconds 00:08:28.795 Relative Read Throughput: 0 00:08:28.795 Relative Read Latency: 0 00:08:28.795 Relative Write Throughput: 0 00:08:28.795 Relative Write Latency: 0 00:08:28.795 Idle Power: Not Reported 00:08:28.795 Active Power: Not Reported 00:08:28.795 Non-Operational Permissive Mode: Not Supported 00:08:28.795 00:08:28.795 Health Information 00:08:28.795 ================== 00:08:28.795 Critical Warnings: 00:08:28.795 Available Spare Space: OK 00:08:28.795 Temperature: OK 00:08:28.795 Device 
Reliability: OK 00:08:28.795 Read Only: No 00:08:28.795 Volatile Memory Backup: OK 00:08:28.795 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.795 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.795 Available Spare: 0% 00:08:28.795 Available Spare Threshold: 0% 00:08:28.796 Life Percentage Used: 0% 00:08:28.796 Data Units Read: 2103 00:08:28.796 Data Units Written: 1890 00:08:28.796 Host Read Commands: 112371 00:08:28.796 Host Write Commands: 110640 00:08:28.796 Controller Busy Time: 0 minutes 00:08:28.796 Power Cycles: 0 00:08:28.796 Power On Hours: 0 hours 00:08:28.796 Unsafe Shutdowns: 0 00:08:28.796 Unrecoverable Media Errors: 0 00:08:28.796 Lifetime Error Log Entries: 0 00:08:28.796 Warning Temperature Time: 0 minutes 00:08:28.796 Critical Temperature Time: 0 minutes 00:08:28.796 00:08:28.796 Number of Queues 00:08:28.796 ================ 00:08:28.796 Number of I/O Submission Queues: 64 00:08:28.796 Number of I/O Completion Queues: 64 00:08:28.796 00:08:28.796 ZNS Specific Controller Data 00:08:28.796 ============================ 00:08:28.796 Zone Append Size Limit: 0 00:08:28.796 00:08:28.796 00:08:28.796 Active Namespaces 00:08:28.796 ================= 00:08:28.796 Namespace ID:1 00:08:28.796 Error Recovery Timeout: Unlimited 00:08:28.796 Command Set Identifier: NVM (00h) 00:08:28.796 Deallocate: Supported 00:08:28.796 Deallocated/Unwritten Error: Supported 00:08:28.796 Deallocated Read Value: All 0x00 00:08:28.796 Deallocate in Write Zeroes: Not Supported 00:08:28.796 Deallocated Guard Field: 0xFFFF 00:08:28.796 Flush: Supported 00:08:28.796 Reservation: Not Supported 00:08:28.796 Namespace Sharing Capabilities: Private 00:08:28.796 Size (in LBAs): 1048576 (4GiB) 00:08:28.796 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.796 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.796 Thin Provisioning: Not Supported 00:08:28.796 Per-NS Atomic Units: No 00:08:28.796 Maximum Single Source Range Length: 128 00:08:28.796 Maximum Copy Length: 128 00:08:28.796 Maximum Source Range Count: 128 00:08:28.796 NGUID/EUI64 Never Reused: No 00:08:28.796 Namespace Write Protected: No 00:08:28.796 Number of LBA Formats: 8 00:08:28.796 Current LBA Format: LBA Format #04 00:08:28.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.796 00:08:28.796 NVM Specific Namespace Data 00:08:28.796 =========================== 00:08:28.796 Logical Block Storage Tag Mask: 0 00:08:28.796 Protection Information Capabilities: 00:08:28.796 16b Guard Protection Information Storage Tag Support: No 00:08:28.796 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.796 Storage Tag Check Read Support: No 00:08:28.796 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Namespace ID:2 00:08:28.796 Error Recovery Timeout: Unlimited 00:08:28.796 Command Set Identifier: NVM (00h) 00:08:28.796 Deallocate: Supported 00:08:28.796 Deallocated/Unwritten Error: Supported 00:08:28.796 Deallocated Read Value: All 0x00 00:08:28.796 Deallocate in Write Zeroes: Not Supported 00:08:28.796 Deallocated Guard Field: 0xFFFF 00:08:28.796 Flush: Supported 00:08:28.796 Reservation: Not Supported 00:08:28.796 Namespace Sharing Capabilities: Private 00:08:28.796 Size (in LBAs): 1048576 (4GiB) 00:08:28.796 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.796 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.796 Thin Provisioning: Not Supported 00:08:28.796 Per-NS Atomic Units: No 00:08:28.796 Maximum Single Source Range Length: 128 00:08:28.796 Maximum Copy Length: 128 00:08:28.796 Maximum Source Range Count: 128 00:08:28.796 NGUID/EUI64 Never Reused: No 00:08:28.796 Namespace Write Protected: No 00:08:28.796 Number of LBA Formats: 8 00:08:28.796 Current LBA Format: LBA Format #04 00:08:28.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.796 00:08:28.796 NVM Specific Namespace Data 00:08:28.796 =========================== 00:08:28.796 Logical Block Storage Tag Mask: 0 00:08:28.796 Protection Information Capabilities: 00:08:28.796 16b Guard Protection Information Storage Tag Support: No 00:08:28.796 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.796 Storage Tag Check Read Support: No 00:08:28.796 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.796 Namespace ID:3 00:08:28.796 Error Recovery Timeout: Unlimited 00:08:28.796 Command Set Identifier: NVM (00h) 00:08:28.796 Deallocate: Supported 00:08:28.796 Deallocated/Unwritten Error: Supported 00:08:28.796 Deallocated Read Value: All 0x00 00:08:28.796 Deallocate in Write Zeroes: Not Supported 00:08:28.796 Deallocated Guard Field: 0xFFFF 00:08:28.796 Flush: Supported 00:08:28.796 Reservation: Not Supported 00:08:28.796 
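
The namespace sizing above is self-consistent: with the current LBA format #04 (4096-byte data, no metadata), 1048576 LBAs is exactly 4 GiB, matching the "(4GiB)" annotation. A one-line check (illustrative):

awk 'BEGIN { printf "%d LBAs x 4096 B = %.0f GiB\n", 1048576, 1048576 * 4096 / 1024^3 }'
# prints: 1048576 LBAs x 4096 B = 4 GiB
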
Namespace Sharing Capabilities: Private 00:08:28.796 Size (in LBAs): 1048576 (4GiB) 00:08:28.796 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.796 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.796 Thin Provisioning: Not Supported 00:08:28.796 Per-NS Atomic Units: No 00:08:28.796 Maximum Single Source Range Length: 128 00:08:28.796 Maximum Copy Length: 128 00:08:28.796 Maximum Source Range Count: 128 00:08:28.796 NGUID/EUI64 Never Reused: No 00:08:28.796 Namespace Write Protected: No 00:08:28.796 Number of LBA Formats: 8 00:08:28.796 Current LBA Format: LBA Format #04 00:08:28.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.797 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.797 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.797 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.797 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.797 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.797 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.797 00:08:28.797 NVM Specific Namespace Data 00:08:28.797 =========================== 00:08:28.797 Logical Block Storage Tag Mask: 0 00:08:28.797 Protection Information Capabilities: 00:08:28.797 16b Guard Protection Information Storage Tag Support: No 00:08:28.797 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.797 Storage Tag Check Read Support: No 00:08:28.797 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.797 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:28.797 15:56:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:29.055 ===================================================== 00:08:29.055 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:29.055 ===================================================== 00:08:29.055 Controller Capabilities/Features 00:08:29.055 ================================ 00:08:29.055 Vendor ID: 1b36 00:08:29.055 Subsystem Vendor ID: 1af4 00:08:29.055 Serial Number: 12343 00:08:29.055 Model Number: QEMU NVMe Ctrl 00:08:29.055 Firmware Version: 8.0.0 00:08:29.055 Recommended Arb Burst: 6 00:08:29.055 IEEE OUI Identifier: 00 54 52 00:08:29.055 Multi-path I/O 00:08:29.055 May have multiple subsystem ports: No 00:08:29.055 May have multiple controllers: Yes 00:08:29.055 Associated with SR-IOV VF: No 00:08:29.055 Max Data Transfer Size: 524288 00:08:29.055 Max Number of Namespaces: 256 00:08:29.055 Max Number of I/O Queues: 64 00:08:29.055 NVMe Specification Version (VS): 1.4 00:08:29.055 NVMe Specification Version (Identify): 1.4 00:08:29.055 Maximum Queue Entries: 2048 
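
The health sections in these dumps report temperature in Kelvin alongside a derived Celsius value; the pairs printed above (323 Kelvin / 50 Celsius current, 343 Kelvin / 70 Celsius threshold) are consistent with a plain K - 273 conversion:

awk 'BEGIN { print 323 - 273, 343 - 273 }'
# prints: 50 70
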
00:08:29.055 Contiguous Queues Required: Yes 00:08:29.055 Arbitration Mechanisms Supported 00:08:29.055 Weighted Round Robin: Not Supported 00:08:29.055 Vendor Specific: Not Supported 00:08:29.055 Reset Timeout: 7500 ms 00:08:29.055 Doorbell Stride: 4 bytes 00:08:29.055 NVM Subsystem Reset: Not Supported 00:08:29.055 Command Sets Supported 00:08:29.055 NVM Command Set: Supported 00:08:29.055 Boot Partition: Not Supported 00:08:29.055 Memory Page Size Minimum: 4096 bytes 00:08:29.055 Memory Page Size Maximum: 65536 bytes 00:08:29.055 Persistent Memory Region: Not Supported 00:08:29.055 Optional Asynchronous Events Supported 00:08:29.055 Namespace Attribute Notices: Supported 00:08:29.055 Firmware Activation Notices: Not Supported 00:08:29.055 ANA Change Notices: Not Supported 00:08:29.055 PLE Aggregate Log Change Notices: Not Supported 00:08:29.055 LBA Status Info Alert Notices: Not Supported 00:08:29.055 EGE Aggregate Log Change Notices: Not Supported 00:08:29.055 Normal NVM Subsystem Shutdown event: Not Supported 00:08:29.055 Zone Descriptor Change Notices: Not Supported 00:08:29.055 Discovery Log Change Notices: Not Supported 00:08:29.055 Controller Attributes 00:08:29.055 128-bit Host Identifier: Not Supported 00:08:29.055 Non-Operational Permissive Mode: Not Supported 00:08:29.055 NVM Sets: Not Supported 00:08:29.055 Read Recovery Levels: Not Supported 00:08:29.055 Endurance Groups: Supported 00:08:29.055 Predictable Latency Mode: Not Supported 00:08:29.055 Traffic Based Keep Alive: Not Supported 00:08:29.055 Namespace Granularity: Not Supported 00:08:29.055 SQ Associations: Not Supported 00:08:29.055 UUID List: Not Supported 00:08:29.055 Multi-Domain Subsystem: Not Supported 00:08:29.055 Fixed Capacity Management: Not Supported 00:08:29.055 Variable Capacity Management: Not Supported 00:08:29.055 Delete Endurance Group: Not Supported 00:08:29.055 Delete NVM Set: Not Supported 00:08:29.055 Extended LBA Formats Supported: Supported 00:08:29.055 Flexible Data Placement Supported: Supported 00:08:29.055 00:08:29.055 Controller Memory Buffer Support 00:08:29.055 ================================ 00:08:29.055 Supported: No 00:08:29.055 00:08:29.055 Persistent Memory Region Support 00:08:29.055 ================================ 00:08:29.055 Supported: No 00:08:29.055 00:08:29.055 Admin Command Set Attributes 00:08:29.055 ============================ 00:08:29.055 Security Send/Receive: Not Supported 00:08:29.055 Format NVM: Supported 00:08:29.055 Firmware Activate/Download: Not Supported 00:08:29.055 Namespace Management: Supported 00:08:29.055 Device Self-Test: Not Supported 00:08:29.055 Directives: Supported 00:08:29.055 NVMe-MI: Not Supported 00:08:29.055 Virtualization Management: Not Supported 00:08:29.055 Doorbell Buffer Config: Supported 00:08:29.055 Get LBA Status Capability: Not Supported 00:08:29.055 Command & Feature Lockdown Capability: Not Supported 00:08:29.055 Abort Command Limit: 4 00:08:29.055 Async Event Request Limit: 4 00:08:29.055 Number of Firmware Slots: N/A 00:08:29.055 Firmware Slot 1 Read-Only: N/A 00:08:29.055 Firmware Activation Without Reset: N/A 00:08:29.055 Multiple Update Detection Support: N/A 00:08:29.055 Firmware Update Granularity: No Information Provided 00:08:29.055 Per-Namespace SMART Log: Yes 00:08:29.055 Asymmetric Namespace Access Log Page: Not Supported 00:08:29.055 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:29.055 Command Effects Log Page: Supported 00:08:29.055 Get Log Page Extended Data: Supported 00:08:29.055 Telemetry Log Pages: Not 
Supported 00:08:29.055 Persistent Event Log Pages: Not Supported 00:08:29.055 Supported Log Pages Log Page: May Support 00:08:29.055 Commands Supported & Effects Log Page: Not Supported 00:08:29.055 Feature Identifiers & Effects Log Page: May Support 00:08:29.055 NVMe-MI Commands & Effects Log Page: May Support 00:08:29.055 Data Area 4 for Telemetry Log: Not Supported 00:08:29.055 Error Log Page Entries Supported: 1 00:08:29.055 Keep Alive: Not Supported 00:08:29.055 00:08:29.055 NVM Command Set Attributes 00:08:29.055 ========================== 00:08:29.055 Submission Queue Entry Size 00:08:29.055 Max: 64 00:08:29.055 Min: 64 00:08:29.055 Completion Queue Entry Size 00:08:29.055 Max: 16 00:08:29.055 Min: 16 00:08:29.055 Number of Namespaces: 256 00:08:29.055 Compare Command: Supported 00:08:29.055 Write Uncorrectable Command: Not Supported 00:08:29.055 Dataset Management Command: Supported 00:08:29.055 Write Zeroes Command: Supported 00:08:29.055 Set Features Save Field: Supported 00:08:29.055 Reservations: Not Supported 00:08:29.055 Timestamp: Supported 00:08:29.055 Copy: Supported 00:08:29.055 Volatile Write Cache: Present 00:08:29.055 Atomic Write Unit (Normal): 1 00:08:29.055 Atomic Write Unit (PFail): 1 00:08:29.055 Atomic Compare & Write Unit: 1 00:08:29.055 Fused Compare & Write: Not Supported 00:08:29.055 Scatter-Gather List 00:08:29.055 SGL Command Set: Supported 00:08:29.055 SGL Keyed: Not Supported 00:08:29.055 SGL Bit Bucket Descriptor: Not Supported 00:08:29.055 SGL Metadata Pointer: Not Supported 00:08:29.055 Oversized SGL: Not Supported 00:08:29.055 SGL Metadata Address: Not Supported 00:08:29.055 SGL Offset: Not Supported 00:08:29.055 Transport SGL Data Block: Not Supported 00:08:29.055 Replay Protected Memory Block: Not Supported 00:08:29.055 00:08:29.055 Firmware Slot Information 00:08:29.055 ========================= 00:08:29.055 Active slot: 1 00:08:29.055 Slot 1 Firmware Revision: 1.0 00:08:29.055 00:08:29.055 00:08:29.055 Commands Supported and Effects 00:08:29.055 ============================== 00:08:29.055 Admin Commands 00:08:29.055 -------------- 00:08:29.055 Delete I/O Submission Queue (00h): Supported 00:08:29.055 Create I/O Submission Queue (01h): Supported 00:08:29.055 Get Log Page (02h): Supported 00:08:29.055 Delete I/O Completion Queue (04h): Supported 00:08:29.055 Create I/O Completion Queue (05h): Supported 00:08:29.055 Identify (06h): Supported 00:08:29.055 Abort (08h): Supported 00:08:29.055 Set Features (09h): Supported 00:08:29.055 Get Features (0Ah): Supported 00:08:29.056 Asynchronous Event Request (0Ch): Supported 00:08:29.056 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:29.056 Directive Send (19h): Supported 00:08:29.056 Directive Receive (1Ah): Supported 00:08:29.056 Virtualization Management (1Ch): Supported 00:08:29.056 Doorbell Buffer Config (7Ch): Supported 00:08:29.056 Format NVM (80h): Supported LBA-Change 00:08:29.056 I/O Commands 00:08:29.056 ------------ 00:08:29.056 Flush (00h): Supported LBA-Change 00:08:29.056 Write (01h): Supported LBA-Change 00:08:29.056 Read (02h): Supported 00:08:29.056 Compare (05h): Supported 00:08:29.056 Write Zeroes (08h): Supported LBA-Change 00:08:29.056 Dataset Management (09h): Supported LBA-Change 00:08:29.056 Unknown (0Ch): Supported 00:08:29.056 Unknown (12h): Supported 00:08:29.056 Copy (19h): Supported LBA-Change 00:08:29.056 Unknown (1Dh): Supported LBA-Change 00:08:29.056 00:08:29.056 Error Log 00:08:29.056 ========= 00:08:29.056 00:08:29.056 Arbitration 00:08:29.056 =========== 
00:08:29.056 Arbitration Burst: no limit 00:08:29.056 00:08:29.056 Power Management 00:08:29.056 ================ 00:08:29.056 Number of Power States: 1 00:08:29.056 Current Power State: Power State #0 00:08:29.056 Power State #0: 00:08:29.056 Max Power: 25.00 W 00:08:29.056 Non-Operational State: Operational 00:08:29.056 Entry Latency: 16 microseconds 00:08:29.056 Exit Latency: 4 microseconds 00:08:29.056 Relative Read Throughput: 0 00:08:29.056 Relative Read Latency: 0 00:08:29.056 Relative Write Throughput: 0 00:08:29.056 Relative Write Latency: 0 00:08:29.056 Idle Power: Not Reported 00:08:29.056 Active Power: Not Reported 00:08:29.056 Non-Operational Permissive Mode: Not Supported 00:08:29.056 00:08:29.056 Health Information 00:08:29.056 ================== 00:08:29.056 Critical Warnings: 00:08:29.056 Available Spare Space: OK 00:08:29.056 Temperature: OK 00:08:29.056 Device Reliability: OK 00:08:29.056 Read Only: No 00:08:29.056 Volatile Memory Backup: OK 00:08:29.056 Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.056 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:29.056 Available Spare: 0% 00:08:29.056 Available Spare Threshold: 0% 00:08:29.056 Life Percentage Used: 0% 00:08:29.056 Data Units Read: 766 00:08:29.056 Data Units Written: 695 00:08:29.056 Host Read Commands: 38039 00:08:29.056 Host Write Commands: 37462 00:08:29.056 Controller Busy Time: 0 minutes 00:08:29.056 Power Cycles: 0 00:08:29.056 Power On Hours: 0 hours 00:08:29.056 Unsafe Shutdowns: 0 00:08:29.056 Unrecoverable Media Errors: 0 00:08:29.056 Lifetime Error Log Entries: 0 00:08:29.056 Warning Temperature Time: 0 minutes 00:08:29.056 Critical Temperature Time: 0 minutes 00:08:29.056 00:08:29.056 Number of Queues 00:08:29.056 ================ 00:08:29.056 Number of I/O Submission Queues: 64 00:08:29.056 Number of I/O Completion Queues: 64 00:08:29.056 00:08:29.056 ZNS Specific Controller Data 00:08:29.056 ============================ 00:08:29.056 Zone Append Size Limit: 0 00:08:29.056 00:08:29.056 00:08:29.056 Active Namespaces 00:08:29.056 ================= 00:08:29.056 Namespace ID:1 00:08:29.056 Error Recovery Timeout: Unlimited 00:08:29.056 Command Set Identifier: NVM (00h) 00:08:29.056 Deallocate: Supported 00:08:29.056 Deallocated/Unwritten Error: Supported 00:08:29.056 Deallocated Read Value: All 0x00 00:08:29.056 Deallocate in Write Zeroes: Not Supported 00:08:29.056 Deallocated Guard Field: 0xFFFF 00:08:29.056 Flush: Supported 00:08:29.056 Reservation: Not Supported 00:08:29.056 Namespace Sharing Capabilities: Multiple Controllers 00:08:29.056 Size (in LBAs): 262144 (1GiB) 00:08:29.056 Capacity (in LBAs): 262144 (1GiB) 00:08:29.056 Utilization (in LBAs): 262144 (1GiB) 00:08:29.056 Thin Provisioning: Not Supported 00:08:29.056 Per-NS Atomic Units: No 00:08:29.056 Maximum Single Source Range Length: 128 00:08:29.056 Maximum Copy Length: 128 00:08:29.056 Maximum Source Range Count: 128 00:08:29.056 NGUID/EUI64 Never Reused: No 00:08:29.056 Namespace Write Protected: No 00:08:29.056 Endurance group ID: 1 00:08:29.056 Number of LBA Formats: 8 00:08:29.056 Current LBA Format: LBA Format #04 00:08:29.056 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:29.056 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:29.056 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:29.056 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:29.056 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:29.056 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:29.056 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:29.056 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:29.056 00:08:29.056 Get Feature FDP: 00:08:29.056 ================ 00:08:29.056 Enabled: Yes 00:08:29.056 FDP configuration index: 0 00:08:29.056 00:08:29.056 FDP configurations log page 00:08:29.056 =========================== 00:08:29.056 Number of FDP configurations: 1 00:08:29.056 Version: 0 00:08:29.056 Size: 112 00:08:29.056 FDP Configuration Descriptor: 0 00:08:29.056 Descriptor Size: 96 00:08:29.056 Reclaim Group Identifier format: 2 00:08:29.056 FDP Volatile Write Cache: Not Present 00:08:29.056 FDP Configuration: Valid 00:08:29.056 Vendor Specific Size: 0 00:08:29.056 Number of Reclaim Groups: 2 00:08:29.056 Number of Reclaim Unit Handles: 8 00:08:29.056 Max Placement Identifiers: 128 00:08:29.056 Number of Namespaces Supported: 256 00:08:29.056 Reclaim Unit Nominal Size: 6000000 bytes 00:08:29.056 Estimated Reclaim Unit Time Limit: Not Reported 00:08:29.056 RUH Desc #000: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #001: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #002: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #003: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #004: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #005: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #006: RUH Type: Initially Isolated 00:08:29.056 RUH Desc #007: RUH Type: Initially Isolated 00:08:29.056 00:08:29.056 FDP reclaim unit handle usage log page 00:08:29.056 ====================================== 00:08:29.056 Number of Reclaim Unit Handles: 8 00:08:29.056 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:29.056 RUH Usage Desc #001: RUH Attributes: Unused 00:08:29.056 RUH Usage Desc #002: RUH Attributes: Unused 00:08:29.056 RUH Usage Desc #003: RUH Attributes: Unused 00:08:29.056 RUH Usage Desc #004: RUH Attributes: Unused 00:08:29.056 RUH Usage Desc #005: RUH Attributes: Unused 00:08:29.056 RUH Usage Desc #006: RUH Attributes: Unused 00:08:29.056 RUH Usage Desc #007: RUH Attributes: Unused 00:08:29.056 00:08:29.056 FDP statistics log page 00:08:29.057 ======================= 00:08:29.057 Host bytes with metadata written: 440901632 00:08:29.057 Media bytes with metadata written: 440954880 00:08:29.057 Media bytes erased: 0 00:08:29.057 00:08:29.057 FDP events log page 00:08:29.057 =================== 00:08:29.057 Number of FDP events: 0 00:08:29.057 00:08:29.057 NVM Specific Namespace Data 00:08:29.057 =========================== 00:08:29.057 Logical Block Storage Tag Mask: 0 00:08:29.057 Protection Information Capabilities: 00:08:29.057 16b Guard Protection Information Storage Tag Support: No 00:08:29.057 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:29.057 Storage Tag Check Read Support: No 00:08:29.057 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.057 00:08:29.057 real 0m1.207s 00:08:29.057 user 0m0.439s 00:08:29.057 sys 0m0.538s 00:08:29.057 15:56:27 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.057 15:56:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:29.057 ************************************ 00:08:29.057 END TEST nvme_identify 00:08:29.057 ************************************ 00:08:29.057 15:56:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:29.057 15:56:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.057 15:56:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.057 15:56:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.057 ************************************ 00:08:29.057 START TEST nvme_perf 00:08:29.057 ************************************ 00:08:29.057 15:56:27 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:29.057 15:56:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:30.431 Initializing NVMe Controllers 00:08:30.431 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.431 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.431 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.431 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.431 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:30.431 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:30.431 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:30.431 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:30.431 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:30.431 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:30.431 Initialization complete. Launching workers. 
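Both tools exercised here can be re-run by hand against the same devices. A minimal sketch, assuming the SPDK build path shown in this log and the same QEMU-emulated controllers; the flag glosses reflect my reading of the tools' usage text, not anything this log itself asserts:

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
  # Identify one controller, as in the dump above: -r selects the transport
  # and PCI address, -i the shared-memory group ID used by the harness.
  "$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
  # Same perf run as above: queue depth 128 (-q), sequential reads (-w read),
  # 12 KiB I/Os (-o 12288), a one-second run (-t 1); -LL enables software
  # latency tracking with the detailed histograms printed below (-L alone
  # gives only the summary); -N is carried over verbatim from the harness.
  "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N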
00:08:30.431 ======================================================== 00:08:30.431 Latency(us) 00:08:30.431 Device Information : IOPS MiB/s Average min max 00:08:30.431 PCIE (0000:00:10.0) NSID 1 from core 0: 17978.37 210.68 7128.16 5502.42 32168.18 00:08:30.431 PCIE (0000:00:11.0) NSID 1 from core 0: 17978.37 210.68 7118.39 5568.48 30400.46 00:08:30.431 PCIE (0000:00:13.0) NSID 1 from core 0: 17978.37 210.68 7107.29 5588.28 29031.83 00:08:30.431 PCIE (0000:00:12.0) NSID 1 from core 0: 17978.37 210.68 7095.96 5597.14 27206.58 00:08:30.431 PCIE (0000:00:12.0) NSID 2 from core 0: 17978.37 210.68 7084.95 5605.92 25481.75 00:08:30.431 PCIE (0000:00:12.0) NSID 3 from core 0: 18042.35 211.43 7048.80 5597.76 20290.68 00:08:30.431 ======================================================== 00:08:30.431 Total : 107934.22 1264.85 7097.23 5502.42 32168.18 00:08:30.431 00:08:30.431 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:30.431 ================================================================================= 00:08:30.431 1.00000% : 5696.591us 00:08:30.431 10.00000% : 5923.446us 00:08:30.431 25.00000% : 6200.714us 00:08:30.431 50.00000% : 6604.012us 00:08:30.431 75.00000% : 7360.197us 00:08:30.431 90.00000% : 8872.566us 00:08:30.431 95.00000% : 9931.225us 00:08:30.431 98.00000% : 11090.708us 00:08:30.431 99.00000% : 11796.480us 00:08:30.431 99.50000% : 27222.646us 00:08:30.431 99.90000% : 31860.578us 00:08:30.431 99.99000% : 32263.877us 00:08:30.431 99.99900% : 32263.877us 00:08:30.431 99.99990% : 32263.877us 00:08:30.431 99.99999% : 32263.877us 00:08:30.431 00:08:30.431 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:30.431 ================================================================================= 00:08:30.431 1.00000% : 5747.003us 00:08:30.431 10.00000% : 5973.858us 00:08:30.431 25.00000% : 6225.920us 00:08:30.431 50.00000% : 6553.600us 00:08:30.431 75.00000% : 7410.609us 00:08:30.431 90.00000% : 8721.329us 00:08:30.431 95.00000% : 9981.637us 00:08:30.431 98.00000% : 11090.708us 00:08:30.431 99.00000% : 11796.480us 00:08:30.431 99.50000% : 25306.978us 00:08:30.431 99.90000% : 30045.735us 00:08:30.431 99.99000% : 30449.034us 00:08:30.431 99.99900% : 30449.034us 00:08:30.431 99.99990% : 30449.034us 00:08:30.431 99.99999% : 30449.034us 00:08:30.431 00:08:30.431 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:30.431 ================================================================================= 00:08:30.431 1.00000% : 5747.003us 00:08:30.431 10.00000% : 5973.858us 00:08:30.431 25.00000% : 6200.714us 00:08:30.431 50.00000% : 6553.600us 00:08:30.431 75.00000% : 7410.609us 00:08:30.431 90.00000% : 8721.329us 00:08:30.431 95.00000% : 10132.874us 00:08:30.431 98.00000% : 11040.295us 00:08:30.431 99.00000% : 12098.954us 00:08:30.431 99.50000% : 23895.434us 00:08:30.431 99.90000% : 28634.191us 00:08:30.431 99.99000% : 29037.489us 00:08:30.431 99.99900% : 29037.489us 00:08:30.431 99.99990% : 29037.489us 00:08:30.431 99.99999% : 29037.489us 00:08:30.431 00:08:30.432 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:30.432 ================================================================================= 00:08:30.432 1.00000% : 5772.209us 00:08:30.432 10.00000% : 5999.065us 00:08:30.432 25.00000% : 6225.920us 00:08:30.432 50.00000% : 6553.600us 00:08:30.432 75.00000% : 7360.197us 00:08:30.432 90.00000% : 8771.742us 00:08:30.432 95.00000% : 10082.462us 00:08:30.432 98.00000% : 10989.883us 00:08:30.432 99.00000% : 
12098.954us 00:08:30.432 99.50000% : 22080.591us 00:08:30.432 99.90000% : 26819.348us 00:08:30.432 99.99000% : 27222.646us 00:08:30.432 99.99900% : 27222.646us 00:08:30.432 99.99990% : 27222.646us 00:08:30.432 99.99999% : 27222.646us 00:08:30.432 00:08:30.432 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:30.432 ================================================================================= 00:08:30.432 1.00000% : 5772.209us 00:08:30.432 10.00000% : 5999.065us 00:08:30.432 25.00000% : 6200.714us 00:08:30.432 50.00000% : 6553.600us 00:08:30.432 75.00000% : 7360.197us 00:08:30.432 90.00000% : 8872.566us 00:08:30.432 95.00000% : 9981.637us 00:08:30.432 98.00000% : 11090.708us 00:08:30.432 99.00000% : 11998.129us 00:08:30.432 99.50000% : 20366.572us 00:08:30.432 99.90000% : 25105.329us 00:08:30.432 99.99000% : 25508.628us 00:08:30.432 99.99900% : 25508.628us 00:08:30.432 99.99990% : 25508.628us 00:08:30.432 99.99999% : 25508.628us 00:08:30.432 00:08:30.432 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:30.432 ================================================================================= 00:08:30.432 1.00000% : 5747.003us 00:08:30.432 10.00000% : 5999.065us 00:08:30.432 25.00000% : 6225.920us 00:08:30.432 50.00000% : 6553.600us 00:08:30.432 75.00000% : 7360.197us 00:08:30.432 90.00000% : 8922.978us 00:08:30.432 95.00000% : 9931.225us 00:08:30.432 98.00000% : 11090.708us 00:08:30.432 99.00000% : 11998.129us 00:08:30.432 99.50000% : 14922.043us 00:08:30.432 99.90000% : 19963.274us 00:08:30.432 99.99000% : 20366.572us 00:08:30.432 99.99900% : 20366.572us 00:08:30.432 99.99990% : 20366.572us 00:08:30.432 99.99999% : 20366.572us 00:08:30.432 00:08:30.432 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:30.432 ============================================================================== 00:08:30.432 Range in us Cumulative IO count 00:08:30.432 5494.942 - 5520.148: 0.0222% ( 4) 00:08:30.432 5520.148 - 5545.354: 0.0556% ( 6) 00:08:30.432 5545.354 - 5570.560: 0.1168% ( 11) 00:08:30.432 5570.560 - 5595.766: 0.2169% ( 18) 00:08:30.432 5595.766 - 5620.972: 0.3781% ( 29) 00:08:30.432 5620.972 - 5646.178: 0.6395% ( 47) 00:08:30.432 5646.178 - 5671.385: 0.9230% ( 51) 00:08:30.432 5671.385 - 5696.591: 1.5180% ( 107) 00:08:30.432 5696.591 - 5721.797: 2.0463% ( 95) 00:08:30.432 5721.797 - 5747.003: 2.7802% ( 132) 00:08:30.432 5747.003 - 5772.209: 3.6032% ( 148) 00:08:30.432 5772.209 - 5797.415: 4.5374% ( 168) 00:08:30.432 5797.415 - 5822.622: 5.6661% ( 203) 00:08:30.432 5822.622 - 5847.828: 6.6559% ( 178) 00:08:30.432 5847.828 - 5873.034: 7.8125% ( 208) 00:08:30.432 5873.034 - 5898.240: 8.8634% ( 189) 00:08:30.432 5898.240 - 5923.446: 10.1201% ( 226) 00:08:30.432 5923.446 - 5948.652: 11.2656% ( 206) 00:08:30.432 5948.652 - 5973.858: 12.6168% ( 243) 00:08:30.432 5973.858 - 5999.065: 13.8345% ( 219) 00:08:30.432 5999.065 - 6024.271: 15.2858% ( 261) 00:08:30.432 6024.271 - 6049.477: 16.6759% ( 250) 00:08:30.432 6049.477 - 6074.683: 18.1606% ( 267) 00:08:30.432 6074.683 - 6099.889: 19.6174% ( 262) 00:08:30.432 6099.889 - 6125.095: 21.1466% ( 275) 00:08:30.432 6125.095 - 6150.302: 22.6479% ( 270) 00:08:30.432 6150.302 - 6175.508: 24.2327% ( 285) 00:08:30.432 6175.508 - 6200.714: 25.8452% ( 290) 00:08:30.432 6200.714 - 6225.920: 27.2298% ( 249) 00:08:30.432 6225.920 - 6251.126: 28.8979% ( 300) 00:08:30.432 6251.126 - 6276.332: 30.4827% ( 285) 00:08:30.432 6276.332 - 6301.538: 32.2620% ( 320) 00:08:30.432 6301.538 - 6326.745: 33.8746% ( 290) 
00:08:30.432 6326.745 - 6351.951: 35.5371% ( 299) 00:08:30.432 6351.951 - 6377.157: 37.2442% ( 307) 00:08:30.432 6377.157 - 6402.363: 38.8846% ( 295) 00:08:30.432 6402.363 - 6427.569: 40.6750% ( 322) 00:08:30.432 6427.569 - 6452.775: 42.3321% ( 298) 00:08:30.432 6452.775 - 6503.188: 45.8185% ( 627) 00:08:30.432 6503.188 - 6553.600: 49.2938% ( 625) 00:08:30.432 6553.600 - 6604.012: 52.6301% ( 600) 00:08:30.432 6604.012 - 6654.425: 55.7774% ( 566) 00:08:30.432 6654.425 - 6704.837: 58.6188% ( 511) 00:08:30.432 6704.837 - 6755.249: 61.0209% ( 432) 00:08:30.432 6755.249 - 6805.662: 63.1450% ( 382) 00:08:30.432 6805.662 - 6856.074: 64.9021% ( 316) 00:08:30.432 6856.074 - 6906.486: 66.3979% ( 269) 00:08:30.432 6906.486 - 6956.898: 67.7769% ( 248) 00:08:30.432 6956.898 - 7007.311: 68.9891% ( 218) 00:08:30.432 7007.311 - 7057.723: 70.1290% ( 205) 00:08:30.432 7057.723 - 7108.135: 71.1966% ( 192) 00:08:30.432 7108.135 - 7158.548: 72.1197% ( 166) 00:08:30.432 7158.548 - 7208.960: 72.9148% ( 143) 00:08:30.432 7208.960 - 7259.372: 73.7600% ( 152) 00:08:30.432 7259.372 - 7309.785: 74.5996% ( 151) 00:08:30.432 7309.785 - 7360.197: 75.3503% ( 135) 00:08:30.432 7360.197 - 7410.609: 76.1399% ( 142) 00:08:30.432 7410.609 - 7461.022: 76.8238% ( 123) 00:08:30.432 7461.022 - 7511.434: 77.4800% ( 118) 00:08:30.432 7511.434 - 7561.846: 78.0694% ( 106) 00:08:30.432 7561.846 - 7612.258: 78.7033% ( 114) 00:08:30.432 7612.258 - 7662.671: 79.2538% ( 99) 00:08:30.432 7662.671 - 7713.083: 79.8043% ( 99) 00:08:30.432 7713.083 - 7763.495: 80.3770% ( 103) 00:08:30.432 7763.495 - 7813.908: 80.9442% ( 102) 00:08:30.432 7813.908 - 7864.320: 81.5614% ( 111) 00:08:30.432 7864.320 - 7914.732: 82.1730% ( 110) 00:08:30.432 7914.732 - 7965.145: 82.7513% ( 104) 00:08:30.432 7965.145 - 8015.557: 83.2963% ( 98) 00:08:30.432 8015.557 - 8065.969: 83.8356% ( 97) 00:08:30.432 8065.969 - 8116.382: 84.3305% ( 89) 00:08:30.432 8116.382 - 8166.794: 84.8254% ( 89) 00:08:30.432 8166.794 - 8217.206: 85.2980% ( 85) 00:08:30.432 8217.206 - 8267.618: 85.8096% ( 92) 00:08:30.432 8267.618 - 8318.031: 86.2433% ( 78) 00:08:30.432 8318.031 - 8368.443: 86.6770% ( 78) 00:08:30.432 8368.443 - 8418.855: 87.0885% ( 74) 00:08:30.432 8418.855 - 8469.268: 87.5222% ( 78) 00:08:30.432 8469.268 - 8519.680: 87.9170% ( 71) 00:08:30.432 8519.680 - 8570.092: 88.3007% ( 69) 00:08:30.432 8570.092 - 8620.505: 88.6733% ( 67) 00:08:30.432 8620.505 - 8670.917: 88.9735% ( 54) 00:08:30.432 8670.917 - 8721.329: 89.2571% ( 51) 00:08:30.432 8721.329 - 8771.742: 89.5129% ( 46) 00:08:30.432 8771.742 - 8822.154: 89.8076% ( 53) 00:08:30.432 8822.154 - 8872.566: 90.0801% ( 49) 00:08:30.432 8872.566 - 8922.978: 90.3581% ( 50) 00:08:30.432 8922.978 - 8973.391: 90.6472% ( 52) 00:08:30.432 8973.391 - 9023.803: 90.8919% ( 44) 00:08:30.432 9023.803 - 9074.215: 91.1866% ( 53) 00:08:30.432 9074.215 - 9124.628: 91.4591% ( 49) 00:08:30.432 9124.628 - 9175.040: 91.7093% ( 45) 00:08:30.432 9175.040 - 9225.452: 91.9818% ( 49) 00:08:30.432 9225.452 - 9275.865: 92.2820% ( 54) 00:08:30.432 9275.865 - 9326.277: 92.5489% ( 48) 00:08:30.432 9326.277 - 9376.689: 92.7714% ( 40) 00:08:30.432 9376.689 - 9427.102: 93.0105% ( 43) 00:08:30.432 9427.102 - 9477.514: 93.2496% ( 43) 00:08:30.432 9477.514 - 9527.926: 93.4664% ( 39) 00:08:30.432 9527.926 - 9578.338: 93.6833% ( 39) 00:08:30.432 9578.338 - 9628.751: 93.9168% ( 42) 00:08:30.432 9628.751 - 9679.163: 94.1337% ( 39) 00:08:30.432 9679.163 - 9729.575: 94.3172% ( 33) 00:08:30.432 9729.575 - 9779.988: 94.5173% ( 36) 00:08:30.432 9779.988 - 9830.400: 94.7064% 
( 34) 00:08:30.432 9830.400 - 9880.812: 94.8732% ( 30) 00:08:30.432 9880.812 - 9931.225: 95.0400% ( 30) 00:08:30.432 9931.225 - 9981.637: 95.2291% ( 34) 00:08:30.432 9981.637 - 10032.049: 95.3959% ( 30) 00:08:30.432 10032.049 - 10082.462: 95.5460% ( 27) 00:08:30.432 10082.462 - 10132.874: 95.7240% ( 32) 00:08:30.432 10132.874 - 10183.286: 95.8908% ( 30) 00:08:30.432 10183.286 - 10233.698: 96.0298% ( 25) 00:08:30.432 10233.698 - 10284.111: 96.1577% ( 23) 00:08:30.432 10284.111 - 10334.523: 96.3412% ( 33) 00:08:30.432 10334.523 - 10384.935: 96.4580% ( 21) 00:08:30.432 10384.935 - 10435.348: 96.5636% ( 19) 00:08:30.432 10435.348 - 10485.760: 96.6748% ( 20) 00:08:30.432 10485.760 - 10536.172: 96.7694% ( 17) 00:08:30.432 10536.172 - 10586.585: 96.8750% ( 19) 00:08:30.432 10586.585 - 10636.997: 96.9640% ( 16) 00:08:30.432 10636.997 - 10687.409: 97.0529% ( 16) 00:08:30.432 10687.409 - 10737.822: 97.1363% ( 15) 00:08:30.432 10737.822 - 10788.234: 97.2531% ( 21) 00:08:30.432 10788.234 - 10838.646: 97.3588% ( 19) 00:08:30.432 10838.646 - 10889.058: 97.4755% ( 21) 00:08:30.432 10889.058 - 10939.471: 97.5923% ( 21) 00:08:30.432 10939.471 - 10989.883: 97.7313% ( 25) 00:08:30.432 10989.883 - 11040.295: 97.8981% ( 30) 00:08:30.432 11040.295 - 11090.708: 98.0316% ( 24) 00:08:30.432 11090.708 - 11141.120: 98.1595% ( 23) 00:08:30.432 11141.120 - 11191.532: 98.2874% ( 23) 00:08:30.432 11191.532 - 11241.945: 98.3652% ( 14) 00:08:30.432 11241.945 - 11292.357: 98.4709% ( 19) 00:08:30.432 11292.357 - 11342.769: 98.5431% ( 13) 00:08:30.432 11342.769 - 11393.182: 98.5876% ( 8) 00:08:30.432 11393.182 - 11443.594: 98.6766% ( 16) 00:08:30.432 11443.594 - 11494.006: 98.7378% ( 11) 00:08:30.432 11494.006 - 11544.418: 98.8045% ( 12) 00:08:30.432 11544.418 - 11594.831: 98.8545% ( 9) 00:08:30.432 11594.831 - 11645.243: 98.9213% ( 12) 00:08:30.432 11645.243 - 11695.655: 98.9713% ( 9) 00:08:30.433 11695.655 - 11746.068: 98.9991% ( 5) 00:08:30.433 11746.068 - 11796.480: 99.0436% ( 8) 00:08:30.433 11796.480 - 11846.892: 99.1048% ( 11) 00:08:30.433 11846.892 - 11897.305: 99.1214% ( 3) 00:08:30.433 11897.305 - 11947.717: 99.1604% ( 7) 00:08:30.433 11947.717 - 11998.129: 99.1826% ( 4) 00:08:30.433 11998.129 - 12048.542: 99.1993% ( 3) 00:08:30.433 12048.542 - 12098.954: 99.2048% ( 1) 00:08:30.433 12098.954 - 12149.366: 99.2160% ( 2) 00:08:30.433 12149.366 - 12199.778: 99.2271% ( 2) 00:08:30.433 12199.778 - 12250.191: 99.2327% ( 1) 00:08:30.433 12250.191 - 12300.603: 99.2438% ( 2) 00:08:30.433 12300.603 - 12351.015: 99.2493% ( 1) 00:08:30.433 12351.015 - 12401.428: 99.2716% ( 4) 00:08:30.433 12401.428 - 12451.840: 99.2771% ( 1) 00:08:30.433 12451.840 - 12502.252: 99.2883% ( 2) 00:08:30.433 26012.751 - 26214.400: 99.3216% ( 6) 00:08:30.433 26214.400 - 26416.049: 99.3661% ( 8) 00:08:30.433 26416.049 - 26617.698: 99.4106% ( 8) 00:08:30.433 26617.698 - 26819.348: 99.4551% ( 8) 00:08:30.433 26819.348 - 27020.997: 99.4996% ( 8) 00:08:30.433 27020.997 - 27222.646: 99.5440% ( 8) 00:08:30.433 27222.646 - 27424.295: 99.5885% ( 8) 00:08:30.433 27424.295 - 27625.945: 99.6330% ( 8) 00:08:30.433 27625.945 - 27827.594: 99.6441% ( 2) 00:08:30.433 30449.034 - 30650.683: 99.6664% ( 4) 00:08:30.433 30650.683 - 30852.332: 99.7109% ( 8) 00:08:30.433 30852.332 - 31053.982: 99.7553% ( 8) 00:08:30.433 31053.982 - 31255.631: 99.7998% ( 8) 00:08:30.433 31255.631 - 31457.280: 99.8499% ( 9) 00:08:30.433 31457.280 - 31658.929: 99.8944% ( 8) 00:08:30.433 31658.929 - 31860.578: 99.9333% ( 7) 00:08:30.433 31860.578 - 32062.228: 99.9833% ( 9) 00:08:30.433 
32062.228 - 32263.877: 100.0000% ( 3) 00:08:30.433 00:08:30.433 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:30.433 ============================================================================== 00:08:30.433 Range in us Cumulative IO count 00:08:30.433 5545.354 - 5570.560: 0.0056% ( 1) 00:08:30.433 5570.560 - 5595.766: 0.0278% ( 4) 00:08:30.433 5595.766 - 5620.972: 0.0612% ( 6) 00:08:30.433 5620.972 - 5646.178: 0.1335% ( 13) 00:08:30.433 5646.178 - 5671.385: 0.2502% ( 21) 00:08:30.433 5671.385 - 5696.591: 0.4226% ( 31) 00:08:30.433 5696.591 - 5721.797: 0.6784% ( 46) 00:08:30.433 5721.797 - 5747.003: 1.1065% ( 77) 00:08:30.433 5747.003 - 5772.209: 1.6681% ( 101) 00:08:30.433 5772.209 - 5797.415: 2.2965% ( 113) 00:08:30.433 5797.415 - 5822.622: 3.0472% ( 135) 00:08:30.433 5822.622 - 5847.828: 3.9368% ( 160) 00:08:30.433 5847.828 - 5873.034: 4.9878% ( 189) 00:08:30.433 5873.034 - 5898.240: 6.1054% ( 201) 00:08:30.433 5898.240 - 5923.446: 7.3009% ( 215) 00:08:30.433 5923.446 - 5948.652: 8.7689% ( 264) 00:08:30.433 5948.652 - 5973.858: 10.1590% ( 250) 00:08:30.433 5973.858 - 5999.065: 11.5770% ( 255) 00:08:30.433 5999.065 - 6024.271: 13.0950% ( 273) 00:08:30.433 6024.271 - 6049.477: 14.5518% ( 262) 00:08:30.433 6049.477 - 6074.683: 16.1755% ( 292) 00:08:30.433 6074.683 - 6099.889: 17.8158% ( 295) 00:08:30.433 6099.889 - 6125.095: 19.5062% ( 304) 00:08:30.433 6125.095 - 6150.302: 21.1911% ( 303) 00:08:30.433 6150.302 - 6175.508: 22.9204% ( 311) 00:08:30.433 6175.508 - 6200.714: 24.7553% ( 330) 00:08:30.433 6200.714 - 6225.920: 26.5736% ( 327) 00:08:30.433 6225.920 - 6251.126: 28.3975% ( 328) 00:08:30.433 6251.126 - 6276.332: 30.2658% ( 336) 00:08:30.433 6276.332 - 6301.538: 32.0062% ( 313) 00:08:30.433 6301.538 - 6326.745: 33.9024% ( 341) 00:08:30.433 6326.745 - 6351.951: 35.7985% ( 341) 00:08:30.433 6351.951 - 6377.157: 37.7113% ( 344) 00:08:30.433 6377.157 - 6402.363: 39.7742% ( 371) 00:08:30.433 6402.363 - 6427.569: 41.8261% ( 369) 00:08:30.433 6427.569 - 6452.775: 43.8334% ( 361) 00:08:30.433 6452.775 - 6503.188: 47.7258% ( 700) 00:08:30.433 6503.188 - 6553.600: 51.4235% ( 665) 00:08:30.433 6553.600 - 6604.012: 54.7653% ( 601) 00:08:30.433 6604.012 - 6654.425: 57.6790% ( 524) 00:08:30.433 6654.425 - 6704.837: 60.0701% ( 430) 00:08:30.433 6704.837 - 6755.249: 62.0718% ( 360) 00:08:30.433 6755.249 - 6805.662: 63.7289% ( 298) 00:08:30.433 6805.662 - 6856.074: 65.2413% ( 272) 00:08:30.433 6856.074 - 6906.486: 66.6036% ( 245) 00:08:30.433 6906.486 - 6956.898: 67.8381% ( 222) 00:08:30.433 6956.898 - 7007.311: 68.9947% ( 208) 00:08:30.433 7007.311 - 7057.723: 69.8899% ( 161) 00:08:30.433 7057.723 - 7108.135: 70.8018% ( 164) 00:08:30.433 7108.135 - 7158.548: 71.6248% ( 148) 00:08:30.433 7158.548 - 7208.960: 72.4255% ( 144) 00:08:30.433 7208.960 - 7259.372: 73.0816% ( 118) 00:08:30.433 7259.372 - 7309.785: 73.7044% ( 112) 00:08:30.433 7309.785 - 7360.197: 74.3105% ( 109) 00:08:30.433 7360.197 - 7410.609: 75.0667% ( 136) 00:08:30.433 7410.609 - 7461.022: 75.7896% ( 130) 00:08:30.433 7461.022 - 7511.434: 76.5069% ( 129) 00:08:30.433 7511.434 - 7561.846: 77.2520% ( 134) 00:08:30.433 7561.846 - 7612.258: 77.9415% ( 124) 00:08:30.433 7612.258 - 7662.671: 78.6866% ( 134) 00:08:30.433 7662.671 - 7713.083: 79.4373% ( 135) 00:08:30.433 7713.083 - 7763.495: 80.1991% ( 137) 00:08:30.433 7763.495 - 7813.908: 80.9331% ( 132) 00:08:30.433 7813.908 - 7864.320: 81.6114% ( 122) 00:08:30.433 7864.320 - 7914.732: 82.3454% ( 132) 00:08:30.433 7914.732 - 7965.145: 82.9904% ( 116) 00:08:30.433 
7965.145 - 8015.557: 83.6410% ( 117) 00:08:30.433 8015.557 - 8065.969: 84.2193% ( 104) 00:08:30.433 8065.969 - 8116.382: 84.8254% ( 109) 00:08:30.433 8116.382 - 8166.794: 85.4037% ( 104) 00:08:30.433 8166.794 - 8217.206: 85.9597% ( 100) 00:08:30.433 8217.206 - 8267.618: 86.5436% ( 105) 00:08:30.433 8267.618 - 8318.031: 87.1052% ( 101) 00:08:30.433 8318.031 - 8368.443: 87.6335% ( 95) 00:08:30.433 8368.443 - 8418.855: 88.1117% ( 86) 00:08:30.433 8418.855 - 8469.268: 88.5565% ( 80) 00:08:30.433 8469.268 - 8519.680: 88.8957% ( 61) 00:08:30.433 8519.680 - 8570.092: 89.1793% ( 51) 00:08:30.433 8570.092 - 8620.505: 89.4851% ( 55) 00:08:30.433 8620.505 - 8670.917: 89.8020% ( 57) 00:08:30.433 8670.917 - 8721.329: 90.0968% ( 53) 00:08:30.433 8721.329 - 8771.742: 90.3637% ( 48) 00:08:30.433 8771.742 - 8822.154: 90.5916% ( 41) 00:08:30.433 8822.154 - 8872.566: 90.8419% ( 45) 00:08:30.433 8872.566 - 8922.978: 91.0921% ( 45) 00:08:30.433 8922.978 - 8973.391: 91.3089% ( 39) 00:08:30.433 8973.391 - 9023.803: 91.5258% ( 39) 00:08:30.433 9023.803 - 9074.215: 91.7260% ( 36) 00:08:30.433 9074.215 - 9124.628: 91.8817% ( 28) 00:08:30.433 9124.628 - 9175.040: 92.0819% ( 36) 00:08:30.433 9175.040 - 9225.452: 92.2765% ( 35) 00:08:30.433 9225.452 - 9275.865: 92.4655% ( 34) 00:08:30.433 9275.865 - 9326.277: 92.6546% ( 34) 00:08:30.433 9326.277 - 9376.689: 92.8270% ( 31) 00:08:30.433 9376.689 - 9427.102: 92.9771% ( 27) 00:08:30.433 9427.102 - 9477.514: 93.1884% ( 38) 00:08:30.433 9477.514 - 9527.926: 93.3663% ( 32) 00:08:30.433 9527.926 - 9578.338: 93.5943% ( 41) 00:08:30.433 9578.338 - 9628.751: 93.8056% ( 38) 00:08:30.433 9628.751 - 9679.163: 93.9947% ( 34) 00:08:30.433 9679.163 - 9729.575: 94.2171% ( 40) 00:08:30.433 9729.575 - 9779.988: 94.4228% ( 37) 00:08:30.433 9779.988 - 9830.400: 94.6008% ( 32) 00:08:30.433 9830.400 - 9880.812: 94.7731% ( 31) 00:08:30.433 9880.812 - 9931.225: 94.9455% ( 31) 00:08:30.433 9931.225 - 9981.637: 95.1179% ( 31) 00:08:30.433 9981.637 - 10032.049: 95.2569% ( 25) 00:08:30.433 10032.049 - 10082.462: 95.3959% ( 25) 00:08:30.433 10082.462 - 10132.874: 95.5238% ( 23) 00:08:30.433 10132.874 - 10183.286: 95.6406% ( 21) 00:08:30.433 10183.286 - 10233.698: 95.7407% ( 18) 00:08:30.433 10233.698 - 10284.111: 95.8685% ( 23) 00:08:30.433 10284.111 - 10334.523: 95.9964% ( 23) 00:08:30.433 10334.523 - 10384.935: 96.1355% ( 25) 00:08:30.433 10384.935 - 10435.348: 96.2633% ( 23) 00:08:30.433 10435.348 - 10485.760: 96.3912% ( 23) 00:08:30.433 10485.760 - 10536.172: 96.4913% ( 18) 00:08:30.433 10536.172 - 10586.585: 96.6081% ( 21) 00:08:30.433 10586.585 - 10636.997: 96.7415% ( 24) 00:08:30.433 10636.997 - 10687.409: 96.8806% ( 25) 00:08:30.433 10687.409 - 10737.822: 97.0307% ( 27) 00:08:30.433 10737.822 - 10788.234: 97.1975% ( 30) 00:08:30.433 10788.234 - 10838.646: 97.3421% ( 26) 00:08:30.433 10838.646 - 10889.058: 97.4922% ( 27) 00:08:30.433 10889.058 - 10939.471: 97.6479% ( 28) 00:08:30.433 10939.471 - 10989.883: 97.7758% ( 23) 00:08:30.433 10989.883 - 11040.295: 97.9037% ( 23) 00:08:30.433 11040.295 - 11090.708: 98.0316% ( 23) 00:08:30.433 11090.708 - 11141.120: 98.1595% ( 23) 00:08:30.433 11141.120 - 11191.532: 98.2651% ( 19) 00:08:30.433 11191.532 - 11241.945: 98.3819% ( 21) 00:08:30.433 11241.945 - 11292.357: 98.4709% ( 16) 00:08:30.433 11292.357 - 11342.769: 98.5543% ( 15) 00:08:30.433 11342.769 - 11393.182: 98.6154% ( 11) 00:08:30.433 11393.182 - 11443.594: 98.6877% ( 13) 00:08:30.433 11443.594 - 11494.006: 98.7656% ( 14) 00:08:30.433 11494.006 - 11544.418: 98.8323% ( 12) 00:08:30.433 
11544.418 - 11594.831: 98.8935% ( 11) 00:08:30.433 11594.831 - 11645.243: 98.9324% ( 7) 00:08:30.433 11645.243 - 11695.655: 98.9602% ( 5) 00:08:30.433 11695.655 - 11746.068: 98.9880% ( 5) 00:08:30.433 11746.068 - 11796.480: 99.0158% ( 5) 00:08:30.433 11796.480 - 11846.892: 99.0492% ( 6) 00:08:30.433 11846.892 - 11897.305: 99.0770% ( 5) 00:08:30.433 11897.305 - 11947.717: 99.1103% ( 6) 00:08:30.433 11947.717 - 11998.129: 99.1492% ( 7) 00:08:30.433 11998.129 - 12048.542: 99.1826% ( 6) 00:08:30.433 12048.542 - 12098.954: 99.2160% ( 6) 00:08:30.433 12098.954 - 12149.366: 99.2493% ( 6) 00:08:30.433 12149.366 - 12199.778: 99.2716% ( 4) 00:08:30.434 12199.778 - 12250.191: 99.2883% ( 3) 00:08:30.434 24298.732 - 24399.557: 99.2994% ( 2) 00:08:30.434 24399.557 - 24500.382: 99.3216% ( 4) 00:08:30.434 24500.382 - 24601.206: 99.3439% ( 4) 00:08:30.434 24601.206 - 24702.031: 99.3661% ( 4) 00:08:30.434 24702.031 - 24802.855: 99.3939% ( 5) 00:08:30.434 24802.855 - 24903.680: 99.4161% ( 4) 00:08:30.434 24903.680 - 25004.505: 99.4384% ( 4) 00:08:30.434 25004.505 - 25105.329: 99.4606% ( 4) 00:08:30.434 25105.329 - 25206.154: 99.4829% ( 4) 00:08:30.434 25206.154 - 25306.978: 99.5107% ( 5) 00:08:30.434 25306.978 - 25407.803: 99.5329% ( 4) 00:08:30.434 25407.803 - 25508.628: 99.5552% ( 4) 00:08:30.434 25508.628 - 25609.452: 99.5774% ( 4) 00:08:30.434 25609.452 - 25710.277: 99.6052% ( 5) 00:08:30.434 25710.277 - 25811.102: 99.6274% ( 4) 00:08:30.434 25811.102 - 26012.751: 99.6441% ( 3) 00:08:30.434 28835.840 - 29037.489: 99.6886% ( 8) 00:08:30.434 29037.489 - 29239.138: 99.7275% ( 7) 00:08:30.434 29239.138 - 29440.788: 99.7720% ( 8) 00:08:30.434 29440.788 - 29642.437: 99.8221% ( 9) 00:08:30.434 29642.437 - 29844.086: 99.8610% ( 7) 00:08:30.434 29844.086 - 30045.735: 99.9110% ( 9) 00:08:30.434 30045.735 - 30247.385: 99.9611% ( 9) 00:08:30.434 30247.385 - 30449.034: 100.0000% ( 7) 00:08:30.434 00:08:30.434 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:30.434 ============================================================================== 00:08:30.434 Range in us Cumulative IO count 00:08:30.434 5570.560 - 5595.766: 0.0111% ( 2) 00:08:30.434 5595.766 - 5620.972: 0.0334% ( 4) 00:08:30.434 5620.972 - 5646.178: 0.0890% ( 10) 00:08:30.434 5646.178 - 5671.385: 0.2224% ( 24) 00:08:30.434 5671.385 - 5696.591: 0.3948% ( 31) 00:08:30.434 5696.591 - 5721.797: 0.6339% ( 43) 00:08:30.434 5721.797 - 5747.003: 1.0287% ( 71) 00:08:30.434 5747.003 - 5772.209: 1.4958% ( 84) 00:08:30.434 5772.209 - 5797.415: 2.2464% ( 135) 00:08:30.434 5797.415 - 5822.622: 3.0360% ( 142) 00:08:30.434 5822.622 - 5847.828: 3.9646% ( 167) 00:08:30.434 5847.828 - 5873.034: 4.9766% ( 182) 00:08:30.434 5873.034 - 5898.240: 6.1054% ( 203) 00:08:30.434 5898.240 - 5923.446: 7.3677% ( 227) 00:08:30.434 5923.446 - 5948.652: 8.6744% ( 235) 00:08:30.434 5948.652 - 5973.858: 10.1312% ( 262) 00:08:30.434 5973.858 - 5999.065: 11.6548% ( 274) 00:08:30.434 5999.065 - 6024.271: 13.2395% ( 285) 00:08:30.434 6024.271 - 6049.477: 14.7742% ( 276) 00:08:30.434 6049.477 - 6074.683: 16.3145% ( 277) 00:08:30.434 6074.683 - 6099.889: 18.0105% ( 305) 00:08:30.434 6099.889 - 6125.095: 19.7843% ( 319) 00:08:30.434 6125.095 - 6150.302: 21.5803% ( 323) 00:08:30.434 6150.302 - 6175.508: 23.4597% ( 338) 00:08:30.434 6175.508 - 6200.714: 25.2947% ( 330) 00:08:30.434 6200.714 - 6225.920: 27.1408% ( 332) 00:08:30.434 6225.920 - 6251.126: 28.9424% ( 324) 00:08:30.434 6251.126 - 6276.332: 30.8052% ( 335) 00:08:30.434 6276.332 - 6301.538: 32.7347% ( 347) 00:08:30.434 
6301.538 - 6326.745: 34.6197% ( 339) 00:08:30.434 6326.745 - 6351.951: 36.5047% ( 339) 00:08:30.434 6351.951 - 6377.157: 38.4397% ( 348) 00:08:30.434 6377.157 - 6402.363: 40.3970% ( 352) 00:08:30.434 6402.363 - 6427.569: 42.3710% ( 355) 00:08:30.434 6427.569 - 6452.775: 44.4173% ( 368) 00:08:30.434 6452.775 - 6503.188: 48.2373% ( 687) 00:08:30.434 6503.188 - 6553.600: 51.9740% ( 672) 00:08:30.434 6553.600 - 6604.012: 55.2269% ( 585) 00:08:30.434 6604.012 - 6654.425: 58.0961% ( 516) 00:08:30.434 6654.425 - 6704.837: 60.4815% ( 429) 00:08:30.434 6704.837 - 6755.249: 62.4778% ( 359) 00:08:30.434 6755.249 - 6805.662: 64.1459% ( 300) 00:08:30.434 6805.662 - 6856.074: 65.6584% ( 272) 00:08:30.434 6856.074 - 6906.486: 67.0819% ( 256) 00:08:30.434 6906.486 - 6956.898: 68.2940% ( 218) 00:08:30.434 6956.898 - 7007.311: 69.3339% ( 187) 00:08:30.434 7007.311 - 7057.723: 70.3403% ( 181) 00:08:30.434 7057.723 - 7108.135: 71.2411% ( 162) 00:08:30.434 7108.135 - 7158.548: 72.0752% ( 150) 00:08:30.434 7158.548 - 7208.960: 72.8425% ( 138) 00:08:30.434 7208.960 - 7259.372: 73.4153% ( 103) 00:08:30.434 7259.372 - 7309.785: 74.0492% ( 114) 00:08:30.434 7309.785 - 7360.197: 74.6664% ( 111) 00:08:30.434 7360.197 - 7410.609: 75.3169% ( 117) 00:08:30.434 7410.609 - 7461.022: 76.0621% ( 134) 00:08:30.434 7461.022 - 7511.434: 76.7738% ( 128) 00:08:30.434 7511.434 - 7561.846: 77.4911% ( 129) 00:08:30.434 7561.846 - 7612.258: 78.1806% ( 124) 00:08:30.434 7612.258 - 7662.671: 78.8868% ( 127) 00:08:30.434 7662.671 - 7713.083: 79.6319% ( 134) 00:08:30.434 7713.083 - 7763.495: 80.4104% ( 140) 00:08:30.434 7763.495 - 7813.908: 81.1054% ( 125) 00:08:30.434 7813.908 - 7864.320: 81.7838% ( 122) 00:08:30.434 7864.320 - 7914.732: 82.3788% ( 107) 00:08:30.434 7914.732 - 7965.145: 82.9571% ( 104) 00:08:30.434 7965.145 - 8015.557: 83.5242% ( 102) 00:08:30.434 8015.557 - 8065.969: 84.1192% ( 107) 00:08:30.434 8065.969 - 8116.382: 84.6919% ( 103) 00:08:30.434 8116.382 - 8166.794: 85.3203% ( 113) 00:08:30.434 8166.794 - 8217.206: 85.8708% ( 99) 00:08:30.434 8217.206 - 8267.618: 86.4268% ( 100) 00:08:30.434 8267.618 - 8318.031: 86.9718% ( 98) 00:08:30.434 8318.031 - 8368.443: 87.5445% ( 103) 00:08:30.434 8368.443 - 8418.855: 88.0783% ( 96) 00:08:30.434 8418.855 - 8469.268: 88.5565% ( 86) 00:08:30.434 8469.268 - 8519.680: 88.9513% ( 71) 00:08:30.434 8519.680 - 8570.092: 89.3016% ( 63) 00:08:30.434 8570.092 - 8620.505: 89.5741% ( 49) 00:08:30.434 8620.505 - 8670.917: 89.8076% ( 42) 00:08:30.434 8670.917 - 8721.329: 90.0245% ( 39) 00:08:30.434 8721.329 - 8771.742: 90.2636% ( 43) 00:08:30.434 8771.742 - 8822.154: 90.4971% ( 42) 00:08:30.434 8822.154 - 8872.566: 90.7529% ( 46) 00:08:30.434 8872.566 - 8922.978: 90.9920% ( 43) 00:08:30.434 8922.978 - 8973.391: 91.2200% ( 41) 00:08:30.434 8973.391 - 9023.803: 91.4313% ( 38) 00:08:30.434 9023.803 - 9074.215: 91.6148% ( 33) 00:08:30.434 9074.215 - 9124.628: 91.8094% ( 35) 00:08:30.434 9124.628 - 9175.040: 91.9651% ( 28) 00:08:30.434 9175.040 - 9225.452: 92.0874% ( 22) 00:08:30.434 9225.452 - 9275.865: 92.2042% ( 21) 00:08:30.434 9275.865 - 9326.277: 92.3265% ( 22) 00:08:30.434 9326.277 - 9376.689: 92.4377% ( 20) 00:08:30.434 9376.689 - 9427.102: 92.5823% ( 26) 00:08:30.434 9427.102 - 9477.514: 92.7547% ( 31) 00:08:30.434 9477.514 - 9527.926: 92.9493% ( 35) 00:08:30.434 9527.926 - 9578.338: 93.1661% ( 39) 00:08:30.434 9578.338 - 9628.751: 93.3608% ( 35) 00:08:30.434 9628.751 - 9679.163: 93.5609% ( 36) 00:08:30.434 9679.163 - 9729.575: 93.7611% ( 36) 00:08:30.434 9729.575 - 9779.988: 93.9557% ( 
35) 00:08:30.434 9779.988 - 9830.400: 94.1114% ( 28) 00:08:30.434 9830.400 - 9880.812: 94.2671% ( 28) 00:08:30.434 9880.812 - 9931.225: 94.4395% ( 31) 00:08:30.434 9931.225 - 9981.637: 94.5785% ( 25) 00:08:30.434 9981.637 - 10032.049: 94.7398% ( 29) 00:08:30.434 10032.049 - 10082.462: 94.9177% ( 32) 00:08:30.434 10082.462 - 10132.874: 95.1012% ( 33) 00:08:30.434 10132.874 - 10183.286: 95.2625% ( 29) 00:08:30.434 10183.286 - 10233.698: 95.4626% ( 36) 00:08:30.434 10233.698 - 10284.111: 95.6573% ( 35) 00:08:30.434 10284.111 - 10334.523: 95.8741% ( 39) 00:08:30.434 10334.523 - 10384.935: 96.0354% ( 29) 00:08:30.434 10384.935 - 10435.348: 96.2133% ( 32) 00:08:30.434 10435.348 - 10485.760: 96.3746% ( 29) 00:08:30.434 10485.760 - 10536.172: 96.5469% ( 31) 00:08:30.434 10536.172 - 10586.585: 96.6971% ( 27) 00:08:30.434 10586.585 - 10636.997: 96.8416% ( 26) 00:08:30.434 10636.997 - 10687.409: 97.0363% ( 35) 00:08:30.434 10687.409 - 10737.822: 97.2031% ( 30) 00:08:30.434 10737.822 - 10788.234: 97.3588% ( 28) 00:08:30.434 10788.234 - 10838.646: 97.5200% ( 29) 00:08:30.434 10838.646 - 10889.058: 97.6646% ( 26) 00:08:30.434 10889.058 - 10939.471: 97.7925% ( 23) 00:08:30.434 10939.471 - 10989.883: 97.9482% ( 28) 00:08:30.434 10989.883 - 11040.295: 98.0816% ( 24) 00:08:30.434 11040.295 - 11090.708: 98.2095% ( 23) 00:08:30.434 11090.708 - 11141.120: 98.3152% ( 19) 00:08:30.434 11141.120 - 11191.532: 98.4041% ( 16) 00:08:30.434 11191.532 - 11241.945: 98.4875% ( 15) 00:08:30.434 11241.945 - 11292.357: 98.5543% ( 12) 00:08:30.434 11292.357 - 11342.769: 98.5988% ( 8) 00:08:30.434 11342.769 - 11393.182: 98.6488% ( 9) 00:08:30.434 11393.182 - 11443.594: 98.6877% ( 7) 00:08:30.434 11443.594 - 11494.006: 98.7211% ( 6) 00:08:30.434 11494.006 - 11544.418: 98.7711% ( 9) 00:08:30.434 11544.418 - 11594.831: 98.7934% ( 4) 00:08:30.434 11594.831 - 11645.243: 98.8101% ( 3) 00:08:30.434 11645.243 - 11695.655: 98.8323% ( 4) 00:08:30.434 11695.655 - 11746.068: 98.8434% ( 2) 00:08:30.434 11746.068 - 11796.480: 98.8601% ( 3) 00:08:30.434 11796.480 - 11846.892: 98.8712% ( 2) 00:08:30.434 11846.892 - 11897.305: 98.8879% ( 3) 00:08:30.434 11897.305 - 11947.717: 98.9213% ( 6) 00:08:30.434 11947.717 - 11998.129: 98.9546% ( 6) 00:08:30.434 11998.129 - 12048.542: 98.9880% ( 6) 00:08:30.434 12048.542 - 12098.954: 99.0158% ( 5) 00:08:30.434 12098.954 - 12149.366: 99.0380% ( 4) 00:08:30.434 12149.366 - 12199.778: 99.0547% ( 3) 00:08:30.434 12199.778 - 12250.191: 99.0770% ( 4) 00:08:30.434 12250.191 - 12300.603: 99.0992% ( 4) 00:08:30.434 12300.603 - 12351.015: 99.1214% ( 4) 00:08:30.434 12351.015 - 12401.428: 99.1381% ( 3) 00:08:30.434 12401.428 - 12451.840: 99.1604% ( 4) 00:08:30.434 12451.840 - 12502.252: 99.1826% ( 4) 00:08:30.435 12502.252 - 12552.665: 99.2048% ( 4) 00:08:30.435 12552.665 - 12603.077: 99.2271% ( 4) 00:08:30.435 12603.077 - 12653.489: 99.2438% ( 3) 00:08:30.435 12653.489 - 12703.902: 99.2660% ( 4) 00:08:30.435 12703.902 - 12754.314: 99.2883% ( 4) 00:08:30.435 22786.363 - 22887.188: 99.2994% ( 2) 00:08:30.435 22887.188 - 22988.012: 99.3216% ( 4) 00:08:30.435 22988.012 - 23088.837: 99.3439% ( 4) 00:08:30.435 23088.837 - 23189.662: 99.3661% ( 4) 00:08:30.435 23189.662 - 23290.486: 99.3883% ( 4) 00:08:30.435 23290.486 - 23391.311: 99.4106% ( 4) 00:08:30.435 23391.311 - 23492.135: 99.4328% ( 4) 00:08:30.435 23492.135 - 23592.960: 99.4551% ( 4) 00:08:30.435 23592.960 - 23693.785: 99.4773% ( 4) 00:08:30.435 23693.785 - 23794.609: 99.4996% ( 4) 00:08:30.435 23794.609 - 23895.434: 99.5162% ( 3) 00:08:30.435 23895.434 - 
23996.258: 99.5385% ( 4) 00:08:30.435 23996.258 - 24097.083: 99.5607% ( 4) 00:08:30.435 24097.083 - 24197.908: 99.5830% ( 4) 00:08:30.435 24197.908 - 24298.732: 99.6052% ( 4) 00:08:30.435 24298.732 - 24399.557: 99.6330% ( 5) 00:08:30.435 24399.557 - 24500.382: 99.6441% ( 2) 00:08:30.435 27424.295 - 27625.945: 99.6775% ( 6) 00:08:30.435 27625.945 - 27827.594: 99.7220% ( 8) 00:08:30.435 27827.594 - 28029.243: 99.7665% ( 8) 00:08:30.435 28029.243 - 28230.892: 99.8109% ( 8) 00:08:30.435 28230.892 - 28432.542: 99.8554% ( 8) 00:08:30.435 28432.542 - 28634.191: 99.9055% ( 9) 00:08:30.435 28634.191 - 28835.840: 99.9500% ( 8) 00:08:30.435 28835.840 - 29037.489: 100.0000% ( 9) 00:08:30.435 00:08:30.435 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:30.435 ============================================================================== 00:08:30.435 Range in us Cumulative IO count 00:08:30.435 5595.766 - 5620.972: 0.0334% ( 6) 00:08:30.435 5620.972 - 5646.178: 0.0945% ( 11) 00:08:30.435 5646.178 - 5671.385: 0.2057% ( 20) 00:08:30.435 5671.385 - 5696.591: 0.4059% ( 36) 00:08:30.435 5696.591 - 5721.797: 0.6395% ( 42) 00:08:30.435 5721.797 - 5747.003: 0.9564% ( 57) 00:08:30.435 5747.003 - 5772.209: 1.5236% ( 102) 00:08:30.435 5772.209 - 5797.415: 2.1964% ( 121) 00:08:30.435 5797.415 - 5822.622: 2.9749% ( 140) 00:08:30.435 5822.622 - 5847.828: 3.8868% ( 164) 00:08:30.435 5847.828 - 5873.034: 4.8488% ( 173) 00:08:30.435 5873.034 - 5898.240: 6.0331% ( 213) 00:08:30.435 5898.240 - 5923.446: 7.2676% ( 222) 00:08:30.435 5923.446 - 5948.652: 8.5743% ( 235) 00:08:30.435 5948.652 - 5973.858: 9.9922% ( 255) 00:08:30.435 5973.858 - 5999.065: 11.3990% ( 253) 00:08:30.435 5999.065 - 6024.271: 12.7891% ( 250) 00:08:30.435 6024.271 - 6049.477: 14.2238% ( 258) 00:08:30.435 6049.477 - 6074.683: 15.9030% ( 302) 00:08:30.435 6074.683 - 6099.889: 17.6657% ( 317) 00:08:30.435 6099.889 - 6125.095: 19.4729% ( 325) 00:08:30.435 6125.095 - 6150.302: 21.2856% ( 326) 00:08:30.435 6150.302 - 6175.508: 23.0038% ( 309) 00:08:30.435 6175.508 - 6200.714: 24.7831% ( 320) 00:08:30.435 6200.714 - 6225.920: 26.6014% ( 327) 00:08:30.435 6225.920 - 6251.126: 28.4253% ( 328) 00:08:30.435 6251.126 - 6276.332: 30.2880% ( 335) 00:08:30.435 6276.332 - 6301.538: 32.1897% ( 342) 00:08:30.435 6301.538 - 6326.745: 34.0914% ( 342) 00:08:30.435 6326.745 - 6351.951: 36.0098% ( 345) 00:08:30.435 6351.951 - 6377.157: 37.9337% ( 346) 00:08:30.435 6377.157 - 6402.363: 39.8632% ( 347) 00:08:30.435 6402.363 - 6427.569: 41.8928% ( 365) 00:08:30.435 6427.569 - 6452.775: 43.8501% ( 352) 00:08:30.435 6452.775 - 6503.188: 47.8203% ( 714) 00:08:30.435 6503.188 - 6553.600: 51.5347% ( 668) 00:08:30.435 6553.600 - 6604.012: 54.9766% ( 619) 00:08:30.435 6604.012 - 6654.425: 57.9738% ( 539) 00:08:30.435 6654.425 - 6704.837: 60.4871% ( 452) 00:08:30.435 6704.837 - 6755.249: 62.5723% ( 375) 00:08:30.435 6755.249 - 6805.662: 64.3628% ( 322) 00:08:30.435 6805.662 - 6856.074: 66.0476% ( 303) 00:08:30.435 6856.074 - 6906.486: 67.5823% ( 276) 00:08:30.435 6906.486 - 6956.898: 68.8612% ( 230) 00:08:30.435 6956.898 - 7007.311: 69.9399% ( 194) 00:08:30.435 7007.311 - 7057.723: 70.9575% ( 183) 00:08:30.435 7057.723 - 7108.135: 71.8472% ( 160) 00:08:30.435 7108.135 - 7158.548: 72.6201% ( 139) 00:08:30.435 7158.548 - 7208.960: 73.3152% ( 125) 00:08:30.435 7208.960 - 7259.372: 73.8879% ( 103) 00:08:30.435 7259.372 - 7309.785: 74.4884% ( 108) 00:08:30.435 7309.785 - 7360.197: 75.0890% ( 108) 00:08:30.435 7360.197 - 7410.609: 75.8174% ( 131) 00:08:30.435 7410.609 - 
7461.022: 76.4958% ( 122) 00:08:30.435 7461.022 - 7511.434: 77.1074% ( 110) 00:08:30.435 7511.434 - 7561.846: 77.7636% ( 118) 00:08:30.435 7561.846 - 7612.258: 78.4976% ( 132) 00:08:30.435 7612.258 - 7662.671: 79.1259% ( 113) 00:08:30.435 7662.671 - 7713.083: 79.6931% ( 102) 00:08:30.435 7713.083 - 7763.495: 80.2658% ( 103) 00:08:30.435 7763.495 - 7813.908: 80.8163% ( 99) 00:08:30.435 7813.908 - 7864.320: 81.3000% ( 87) 00:08:30.435 7864.320 - 7914.732: 81.8339% ( 96) 00:08:30.435 7914.732 - 7965.145: 82.3621% ( 95) 00:08:30.435 7965.145 - 8015.557: 82.8848% ( 94) 00:08:30.435 8015.557 - 8065.969: 83.4964% ( 110) 00:08:30.435 8065.969 - 8116.382: 84.0747% ( 104) 00:08:30.435 8116.382 - 8166.794: 84.6530% ( 104) 00:08:30.435 8166.794 - 8217.206: 85.2091% ( 100) 00:08:30.435 8217.206 - 8267.618: 85.7596% ( 99) 00:08:30.435 8267.618 - 8318.031: 86.3101% ( 99) 00:08:30.435 8318.031 - 8368.443: 86.9217% ( 110) 00:08:30.435 8368.443 - 8418.855: 87.4944% ( 103) 00:08:30.435 8418.855 - 8469.268: 87.9615% ( 84) 00:08:30.435 8469.268 - 8519.680: 88.4453% ( 87) 00:08:30.435 8519.680 - 8570.092: 88.8790% ( 78) 00:08:30.435 8570.092 - 8620.505: 89.2738% ( 71) 00:08:30.435 8620.505 - 8670.917: 89.6019% ( 59) 00:08:30.435 8670.917 - 8721.329: 89.8743% ( 49) 00:08:30.435 8721.329 - 8771.742: 90.1357% ( 47) 00:08:30.435 8771.742 - 8822.154: 90.3692% ( 42) 00:08:30.435 8822.154 - 8872.566: 90.5750% ( 37) 00:08:30.435 8872.566 - 8922.978: 90.7751% ( 36) 00:08:30.435 8922.978 - 8973.391: 91.0309% ( 46) 00:08:30.435 8973.391 - 9023.803: 91.2478% ( 39) 00:08:30.435 9023.803 - 9074.215: 91.4202% ( 31) 00:08:30.435 9074.215 - 9124.628: 91.6148% ( 35) 00:08:30.435 9124.628 - 9175.040: 91.8038% ( 34) 00:08:30.435 9175.040 - 9225.452: 91.9873% ( 33) 00:08:30.435 9225.452 - 9275.865: 92.1597% ( 31) 00:08:30.435 9275.865 - 9326.277: 92.3432% ( 33) 00:08:30.435 9326.277 - 9376.689: 92.5489% ( 37) 00:08:30.435 9376.689 - 9427.102: 92.7213% ( 31) 00:08:30.435 9427.102 - 9477.514: 92.8826% ( 29) 00:08:30.435 9477.514 - 9527.926: 93.0883% ( 37) 00:08:30.435 9527.926 - 9578.338: 93.2607% ( 31) 00:08:30.435 9578.338 - 9628.751: 93.4219% ( 29) 00:08:30.435 9628.751 - 9679.163: 93.5776% ( 28) 00:08:30.435 9679.163 - 9729.575: 93.7611% ( 33) 00:08:30.435 9729.575 - 9779.988: 93.9335% ( 31) 00:08:30.435 9779.988 - 9830.400: 94.1170% ( 33) 00:08:30.435 9830.400 - 9880.812: 94.2949% ( 32) 00:08:30.435 9880.812 - 9931.225: 94.4673% ( 31) 00:08:30.435 9931.225 - 9981.637: 94.6786% ( 38) 00:08:30.435 9981.637 - 10032.049: 94.8677% ( 34) 00:08:30.435 10032.049 - 10082.462: 95.0845% ( 39) 00:08:30.435 10082.462 - 10132.874: 95.3181% ( 42) 00:08:30.435 10132.874 - 10183.286: 95.5238% ( 37) 00:08:30.435 10183.286 - 10233.698: 95.7240% ( 36) 00:08:30.435 10233.698 - 10284.111: 95.9130% ( 34) 00:08:30.435 10284.111 - 10334.523: 96.1021% ( 34) 00:08:30.435 10334.523 - 10384.935: 96.2133% ( 20) 00:08:30.435 10384.935 - 10435.348: 96.3579% ( 26) 00:08:30.435 10435.348 - 10485.760: 96.5024% ( 26) 00:08:30.435 10485.760 - 10536.172: 96.6693% ( 30) 00:08:30.435 10536.172 - 10586.585: 96.9028% ( 42) 00:08:30.435 10586.585 - 10636.997: 97.0974% ( 35) 00:08:30.435 10636.997 - 10687.409: 97.2809% ( 33) 00:08:30.435 10687.409 - 10737.822: 97.4422% ( 29) 00:08:30.435 10737.822 - 10788.234: 97.6201% ( 32) 00:08:30.435 10788.234 - 10838.646: 97.7536% ( 24) 00:08:30.435 10838.646 - 10889.058: 97.8815% ( 23) 00:08:30.435 10889.058 - 10939.471: 97.9871% ( 19) 00:08:30.435 10939.471 - 10989.883: 98.0983% ( 20) 00:08:30.435 10989.883 - 11040.295: 98.2095% ( 
20) 00:08:30.435 11040.295 - 11090.708: 98.2985% ( 16) 00:08:30.435 11090.708 - 11141.120: 98.3819% ( 15) 00:08:30.435 11141.120 - 11191.532: 98.4820% ( 18) 00:08:30.435 11191.532 - 11241.945: 98.5765% ( 17) 00:08:30.435 11241.945 - 11292.357: 98.6599% ( 15) 00:08:30.435 11292.357 - 11342.769: 98.7600% ( 18) 00:08:30.435 11342.769 - 11393.182: 98.8212% ( 11) 00:08:30.435 11393.182 - 11443.594: 98.8768% ( 10) 00:08:30.435 11443.594 - 11494.006: 98.9101% ( 6) 00:08:30.435 11494.006 - 11544.418: 98.9268% ( 3) 00:08:30.435 11544.418 - 11594.831: 98.9324% ( 1) 00:08:30.435 11846.892 - 11897.305: 98.9379% ( 1) 00:08:30.435 11897.305 - 11947.717: 98.9546% ( 3) 00:08:30.435 11947.717 - 11998.129: 98.9769% ( 4) 00:08:30.435 11998.129 - 12048.542: 98.9935% ( 3) 00:08:30.435 12048.542 - 12098.954: 99.0158% ( 4) 00:08:30.435 12098.954 - 12149.366: 99.0380% ( 4) 00:08:30.435 12149.366 - 12199.778: 99.0547% ( 3) 00:08:30.435 12199.778 - 12250.191: 99.0770% ( 4) 00:08:30.435 12250.191 - 12300.603: 99.0992% ( 4) 00:08:30.435 12300.603 - 12351.015: 99.1214% ( 4) 00:08:30.435 12351.015 - 12401.428: 99.1381% ( 3) 00:08:30.435 12401.428 - 12451.840: 99.1604% ( 4) 00:08:30.435 12451.840 - 12502.252: 99.1826% ( 4) 00:08:30.435 12502.252 - 12552.665: 99.1993% ( 3) 00:08:30.435 12552.665 - 12603.077: 99.2160% ( 3) 00:08:30.435 12603.077 - 12653.489: 99.2382% ( 4) 00:08:30.435 12653.489 - 12703.902: 99.2549% ( 3) 00:08:30.435 12703.902 - 12754.314: 99.2716% ( 3) 00:08:30.436 12754.314 - 12804.726: 99.2883% ( 3) 00:08:30.436 21072.345 - 21173.169: 99.2994% ( 2) 00:08:30.436 21173.169 - 21273.994: 99.3216% ( 4) 00:08:30.436 21273.994 - 21374.818: 99.3439% ( 4) 00:08:30.436 21374.818 - 21475.643: 99.3661% ( 4) 00:08:30.436 21475.643 - 21576.468: 99.3883% ( 4) 00:08:30.436 21576.468 - 21677.292: 99.4106% ( 4) 00:08:30.436 21677.292 - 21778.117: 99.4384% ( 5) 00:08:30.436 21778.117 - 21878.942: 99.4606% ( 4) 00:08:30.436 21878.942 - 21979.766: 99.4829% ( 4) 00:08:30.436 21979.766 - 22080.591: 99.5051% ( 4) 00:08:30.436 22080.591 - 22181.415: 99.5274% ( 4) 00:08:30.436 22181.415 - 22282.240: 99.5552% ( 5) 00:08:30.436 22282.240 - 22383.065: 99.5774% ( 4) 00:08:30.436 22383.065 - 22483.889: 99.5996% ( 4) 00:08:30.436 22483.889 - 22584.714: 99.6219% ( 4) 00:08:30.436 22584.714 - 22685.538: 99.6441% ( 4) 00:08:30.436 25609.452 - 25710.277: 99.6497% ( 1) 00:08:30.436 25710.277 - 25811.102: 99.6719% ( 4) 00:08:30.436 25811.102 - 26012.751: 99.7220% ( 9) 00:08:30.436 26012.751 - 26214.400: 99.7665% ( 8) 00:08:30.436 26214.400 - 26416.049: 99.8165% ( 9) 00:08:30.436 26416.049 - 26617.698: 99.8610% ( 8) 00:08:30.436 26617.698 - 26819.348: 99.9110% ( 9) 00:08:30.436 26819.348 - 27020.997: 99.9555% ( 8) 00:08:30.436 27020.997 - 27222.646: 100.0000% ( 8) 00:08:30.436 00:08:30.436 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:30.436 ============================================================================== 00:08:30.436 Range in us Cumulative IO count 00:08:30.436 5595.766 - 5620.972: 0.0222% ( 4) 00:08:30.436 5620.972 - 5646.178: 0.0723% ( 9) 00:08:30.436 5646.178 - 5671.385: 0.1779% ( 19) 00:08:30.436 5671.385 - 5696.591: 0.3726% ( 35) 00:08:30.436 5696.591 - 5721.797: 0.6283% ( 46) 00:08:30.436 5721.797 - 5747.003: 0.9508% ( 58) 00:08:30.436 5747.003 - 5772.209: 1.4457% ( 89) 00:08:30.436 5772.209 - 5797.415: 2.0852% ( 115) 00:08:30.436 5797.415 - 5822.622: 2.8470% ( 137) 00:08:30.436 5822.622 - 5847.828: 3.7367% ( 160) 00:08:30.436 5847.828 - 5873.034: 4.7320% ( 179) 00:08:30.436 5873.034 - 5898.240: 
5.8608% ( 203) 00:08:30.436 5898.240 - 5923.446: 7.1842% ( 238) 00:08:30.436 5923.446 - 5948.652: 8.5409% ( 244) 00:08:30.436 5948.652 - 5973.858: 9.9644% ( 256) 00:08:30.436 5973.858 - 5999.065: 11.4268% ( 263) 00:08:30.436 5999.065 - 6024.271: 12.9059% ( 266) 00:08:30.436 6024.271 - 6049.477: 14.4517% ( 278) 00:08:30.436 6049.477 - 6074.683: 16.1254% ( 301) 00:08:30.436 6074.683 - 6099.889: 17.9104% ( 321) 00:08:30.436 6099.889 - 6125.095: 19.6452% ( 312) 00:08:30.436 6125.095 - 6150.302: 21.4079% ( 317) 00:08:30.436 6150.302 - 6175.508: 23.1928% ( 321) 00:08:30.436 6175.508 - 6200.714: 25.0389% ( 332) 00:08:30.436 6200.714 - 6225.920: 26.9017% ( 335) 00:08:30.436 6225.920 - 6251.126: 28.7200% ( 327) 00:08:30.436 6251.126 - 6276.332: 30.6217% ( 342) 00:08:30.436 6276.332 - 6301.538: 32.5122% ( 340) 00:08:30.436 6301.538 - 6326.745: 34.4695% ( 352) 00:08:30.436 6326.745 - 6351.951: 36.3768% ( 343) 00:08:30.436 6351.951 - 6377.157: 38.3174% ( 349) 00:08:30.436 6377.157 - 6402.363: 40.2524% ( 348) 00:08:30.436 6402.363 - 6427.569: 42.1986% ( 350) 00:08:30.436 6427.569 - 6452.775: 44.2060% ( 361) 00:08:30.436 6452.775 - 6503.188: 48.0371% ( 689) 00:08:30.436 6503.188 - 6553.600: 51.6070% ( 642) 00:08:30.436 6553.600 - 6604.012: 55.0156% ( 613) 00:08:30.436 6604.012 - 6654.425: 58.1294% ( 560) 00:08:30.436 6654.425 - 6704.837: 60.7095% ( 464) 00:08:30.436 6704.837 - 6755.249: 62.8114% ( 378) 00:08:30.436 6755.249 - 6805.662: 64.7020% ( 340) 00:08:30.436 6805.662 - 6856.074: 66.4257% ( 310) 00:08:30.436 6856.074 - 6906.486: 67.9493% ( 274) 00:08:30.436 6906.486 - 6956.898: 69.2338% ( 231) 00:08:30.436 6956.898 - 7007.311: 70.3903% ( 208) 00:08:30.436 7007.311 - 7057.723: 71.3190% ( 167) 00:08:30.436 7057.723 - 7108.135: 72.1697% ( 153) 00:08:30.436 7108.135 - 7158.548: 73.0038% ( 150) 00:08:30.436 7158.548 - 7208.960: 73.6655% ( 119) 00:08:30.436 7208.960 - 7259.372: 74.3272% ( 119) 00:08:30.436 7259.372 - 7309.785: 74.9611% ( 114) 00:08:30.436 7309.785 - 7360.197: 75.6283% ( 120) 00:08:30.436 7360.197 - 7410.609: 76.2734% ( 116) 00:08:30.436 7410.609 - 7461.022: 76.9184% ( 116) 00:08:30.436 7461.022 - 7511.434: 77.5690% ( 117) 00:08:30.436 7511.434 - 7561.846: 78.2807% ( 128) 00:08:30.436 7561.846 - 7612.258: 78.8757% ( 107) 00:08:30.436 7612.258 - 7662.671: 79.3706% ( 89) 00:08:30.436 7662.671 - 7713.083: 79.8543% ( 87) 00:08:30.436 7713.083 - 7763.495: 80.3548% ( 90) 00:08:30.436 7763.495 - 7813.908: 80.8163% ( 83) 00:08:30.436 7813.908 - 7864.320: 81.2333% ( 75) 00:08:30.436 7864.320 - 7914.732: 81.6615% ( 77) 00:08:30.436 7914.732 - 7965.145: 82.1842% ( 94) 00:08:30.436 7965.145 - 8015.557: 82.7124% ( 95) 00:08:30.436 8015.557 - 8065.969: 83.1962% ( 87) 00:08:30.436 8065.969 - 8116.382: 83.6911% ( 89) 00:08:30.436 8116.382 - 8166.794: 84.1971% ( 91) 00:08:30.436 8166.794 - 8217.206: 84.7253% ( 95) 00:08:30.436 8217.206 - 8267.618: 85.2424% ( 93) 00:08:30.436 8267.618 - 8318.031: 85.7651% ( 94) 00:08:30.436 8318.031 - 8368.443: 86.2823% ( 93) 00:08:30.436 8368.443 - 8418.855: 86.7938% ( 92) 00:08:30.436 8418.855 - 8469.268: 87.2387% ( 80) 00:08:30.436 8469.268 - 8519.680: 87.6724% ( 78) 00:08:30.436 8519.680 - 8570.092: 88.1506% ( 86) 00:08:30.436 8570.092 - 8620.505: 88.5454% ( 71) 00:08:30.436 8620.505 - 8670.917: 88.8734% ( 59) 00:08:30.436 8670.917 - 8721.329: 89.1960% ( 58) 00:08:30.436 8721.329 - 8771.742: 89.4907% ( 53) 00:08:30.436 8771.742 - 8822.154: 89.7464% ( 46) 00:08:30.436 8822.154 - 8872.566: 90.0078% ( 47) 00:08:30.436 8872.566 - 8922.978: 90.2636% ( 46) 00:08:30.436 
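
Each bucket row in these histograms has the fixed shape "low_us - high_us: cumulative% ( count )", so the tables can be recovered from a saved console log mechanically. A minimal parsing sketch in Python; "parse_bucket" is an illustrative helper name, and the regex simply ignores anything that is not a histogram row (including the Jenkins timestamp prefixes):

import re

# One histogram row: "8922.978 - 8973.391: 90.5416% ( 50)"
BUCKET_RE = re.compile(
    r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
    r"(?P<pct>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\s*\)")

def parse_bucket(text):
    # Returns (lo_us, hi_us, cumulative_pct, io_count), or None
    # if the text contains no histogram row.
    m = BUCKET_RE.search(text)
    if m is None:
        return None
    return (float(m["lo"]), float(m["hi"]),
            float(m["pct"]), int(m["count"]))

print(parse_bucket("8922.978 - 8973.391: 90.5416% ( 50)"))
# (8922.978, 8973.391, 90.5416, 50)
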
8922.978 - 8973.391: 90.5416% ( 50) 00:08:30.436 8973.391 - 9023.803: 90.8141% ( 49) 00:08:30.436 9023.803 - 9074.215: 91.0976% ( 51) 00:08:30.436 9074.215 - 9124.628: 91.3201% ( 40) 00:08:30.436 9124.628 - 9175.040: 91.5981% ( 50) 00:08:30.436 9175.040 - 9225.452: 91.8539% ( 46) 00:08:30.436 9225.452 - 9275.865: 92.0985% ( 44) 00:08:30.436 9275.865 - 9326.277: 92.3376% ( 43) 00:08:30.436 9326.277 - 9376.689: 92.5934% ( 46) 00:08:30.436 9376.689 - 9427.102: 92.8158% ( 40) 00:08:30.436 9427.102 - 9477.514: 93.0271% ( 38) 00:08:30.436 9477.514 - 9527.926: 93.2384% ( 38) 00:08:30.436 9527.926 - 9578.338: 93.4497% ( 38) 00:08:30.436 9578.338 - 9628.751: 93.6444% ( 35) 00:08:30.436 9628.751 - 9679.163: 93.8334% ( 34) 00:08:30.436 9679.163 - 9729.575: 94.0336% ( 36) 00:08:30.436 9729.575 - 9779.988: 94.2226% ( 34) 00:08:30.436 9779.988 - 9830.400: 94.4395% ( 39) 00:08:30.436 9830.400 - 9880.812: 94.6564% ( 39) 00:08:30.436 9880.812 - 9931.225: 94.8621% ( 37) 00:08:30.436 9931.225 - 9981.637: 95.0345% ( 31) 00:08:30.436 9981.637 - 10032.049: 95.2180% ( 33) 00:08:30.436 10032.049 - 10082.462: 95.4181% ( 36) 00:08:30.436 10082.462 - 10132.874: 95.6072% ( 34) 00:08:30.436 10132.874 - 10183.286: 95.7629% ( 28) 00:08:30.436 10183.286 - 10233.698: 95.9075% ( 26) 00:08:30.436 10233.698 - 10284.111: 96.0354% ( 23) 00:08:30.436 10284.111 - 10334.523: 96.1521% ( 21) 00:08:30.436 10334.523 - 10384.935: 96.2689% ( 21) 00:08:30.436 10384.935 - 10435.348: 96.3968% ( 23) 00:08:30.436 10435.348 - 10485.760: 96.5080% ( 20) 00:08:30.436 10485.760 - 10536.172: 96.6137% ( 19) 00:08:30.436 10536.172 - 10586.585: 96.7304% ( 21) 00:08:30.436 10586.585 - 10636.997: 96.8472% ( 21) 00:08:30.436 10636.997 - 10687.409: 96.9862% ( 25) 00:08:30.436 10687.409 - 10737.822: 97.1363% ( 27) 00:08:30.436 10737.822 - 10788.234: 97.2976% ( 29) 00:08:30.436 10788.234 - 10838.646: 97.4477% ( 27) 00:08:30.436 10838.646 - 10889.058: 97.5812% ( 24) 00:08:30.436 10889.058 - 10939.471: 97.6980% ( 21) 00:08:30.436 10939.471 - 10989.883: 97.8425% ( 26) 00:08:30.436 10989.883 - 11040.295: 97.9760% ( 24) 00:08:30.437 11040.295 - 11090.708: 98.0816% ( 19) 00:08:30.437 11090.708 - 11141.120: 98.1817% ( 18) 00:08:30.437 11141.120 - 11191.532: 98.2651% ( 15) 00:08:30.437 11191.532 - 11241.945: 98.3319% ( 12) 00:08:30.437 11241.945 - 11292.357: 98.4041% ( 13) 00:08:30.437 11292.357 - 11342.769: 98.4709% ( 12) 00:08:30.437 11342.769 - 11393.182: 98.5431% ( 13) 00:08:30.437 11393.182 - 11443.594: 98.6043% ( 11) 00:08:30.437 11443.594 - 11494.006: 98.6766% ( 13) 00:08:30.437 11494.006 - 11544.418: 98.7266% ( 9) 00:08:30.437 11544.418 - 11594.831: 98.7823% ( 10) 00:08:30.437 11594.831 - 11645.243: 98.8156% ( 6) 00:08:30.437 11645.243 - 11695.655: 98.8490% ( 6) 00:08:30.437 11695.655 - 11746.068: 98.8712% ( 4) 00:08:30.437 11746.068 - 11796.480: 98.8990% ( 5) 00:08:30.437 11796.480 - 11846.892: 98.9435% ( 8) 00:08:30.437 11846.892 - 11897.305: 98.9769% ( 6) 00:08:30.437 11897.305 - 11947.717: 98.9991% ( 4) 00:08:30.437 11947.717 - 11998.129: 99.0214% ( 4) 00:08:30.437 11998.129 - 12048.542: 99.0380% ( 3) 00:08:30.437 12048.542 - 12098.954: 99.0603% ( 4) 00:08:30.437 12098.954 - 12149.366: 99.0770% ( 3) 00:08:30.437 12149.366 - 12199.778: 99.0992% ( 4) 00:08:30.437 12199.778 - 12250.191: 99.1159% ( 3) 00:08:30.437 12250.191 - 12300.603: 99.1381% ( 4) 00:08:30.437 12300.603 - 12351.015: 99.1548% ( 3) 00:08:30.437 12351.015 - 12401.428: 99.1770% ( 4) 00:08:30.437 12401.428 - 12451.840: 99.1937% ( 3) 00:08:30.437 12451.840 - 12502.252: 99.2104% ( 3) 
00:08:30.437 12502.252 - 12552.665: 99.2327% ( 4) 00:08:30.437 12552.665 - 12603.077: 99.2493% ( 3) 00:08:30.437 12603.077 - 12653.489: 99.2660% ( 3) 00:08:30.437 12653.489 - 12703.902: 99.2883% ( 4) 00:08:30.437 19358.326 - 19459.151: 99.3105% ( 4) 00:08:30.437 19459.151 - 19559.975: 99.3327% ( 4) 00:08:30.437 19559.975 - 19660.800: 99.3550% ( 4) 00:08:30.437 19660.800 - 19761.625: 99.3772% ( 4) 00:08:30.437 19761.625 - 19862.449: 99.3995% ( 4) 00:08:30.437 19862.449 - 19963.274: 99.4217% ( 4) 00:08:30.437 19963.274 - 20064.098: 99.4495% ( 5) 00:08:30.437 20064.098 - 20164.923: 99.4718% ( 4) 00:08:30.437 20164.923 - 20265.748: 99.4940% ( 4) 00:08:30.437 20265.748 - 20366.572: 99.5162% ( 4) 00:08:30.437 20366.572 - 20467.397: 99.5385% ( 4) 00:08:30.437 20467.397 - 20568.222: 99.5607% ( 4) 00:08:30.437 20568.222 - 20669.046: 99.5885% ( 5) 00:08:30.437 20669.046 - 20769.871: 99.6108% ( 4) 00:08:30.437 20769.871 - 20870.695: 99.6330% ( 4) 00:08:30.437 20870.695 - 20971.520: 99.6441% ( 2) 00:08:30.437 23895.434 - 23996.258: 99.6608% ( 3) 00:08:30.437 23996.258 - 24097.083: 99.6775% ( 3) 00:08:30.437 24097.083 - 24197.908: 99.7053% ( 5) 00:08:30.437 24197.908 - 24298.732: 99.7275% ( 4) 00:08:30.437 24298.732 - 24399.557: 99.7498% ( 4) 00:08:30.437 24399.557 - 24500.382: 99.7720% ( 4) 00:08:30.437 24500.382 - 24601.206: 99.7943% ( 4) 00:08:30.437 24601.206 - 24702.031: 99.8165% ( 4) 00:08:30.437 24702.031 - 24802.855: 99.8387% ( 4) 00:08:30.437 24802.855 - 24903.680: 99.8610% ( 4) 00:08:30.437 24903.680 - 25004.505: 99.8888% ( 5) 00:08:30.437 25004.505 - 25105.329: 99.9110% ( 4) 00:08:30.437 25105.329 - 25206.154: 99.9333% ( 4) 00:08:30.437 25206.154 - 25306.978: 99.9555% ( 4) 00:08:30.437 25306.978 - 25407.803: 99.9833% ( 5) 00:08:30.437 25407.803 - 25508.628: 100.0000% ( 3) 00:08:30.437 00:08:30.437 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:30.437 ============================================================================== 00:08:30.437 Range in us Cumulative IO count 00:08:30.437 5595.766 - 5620.972: 0.0332% ( 6) 00:08:30.437 5620.972 - 5646.178: 0.0720% ( 7) 00:08:30.437 5646.178 - 5671.385: 0.2050% ( 24) 00:08:30.437 5671.385 - 5696.591: 0.3879% ( 33) 00:08:30.437 5696.591 - 5721.797: 0.6704% ( 51) 00:08:30.437 5721.797 - 5747.003: 1.0860% ( 75) 00:08:30.437 5747.003 - 5772.209: 1.4905% ( 73) 00:08:30.437 5772.209 - 5797.415: 2.0279% ( 97) 00:08:30.437 5797.415 - 5822.622: 2.7981% ( 139) 00:08:30.437 5822.622 - 5847.828: 3.7733% ( 176) 00:08:30.437 5847.828 - 5873.034: 4.6764% ( 163) 00:08:30.437 5873.034 - 5898.240: 5.8621% ( 214) 00:08:30.437 5898.240 - 5923.446: 7.1365% ( 230) 00:08:30.437 5923.446 - 5948.652: 8.4164% ( 231) 00:08:30.437 5948.652 - 5973.858: 9.7019% ( 232) 00:08:30.437 5973.858 - 5999.065: 11.1148% ( 255) 00:08:30.437 5999.065 - 6024.271: 12.6718% ( 281) 00:08:30.437 6024.271 - 6049.477: 14.2066% ( 277) 00:08:30.437 6049.477 - 6074.683: 15.8078% ( 289) 00:08:30.437 6074.683 - 6099.889: 17.5698% ( 318) 00:08:30.437 6099.889 - 6125.095: 19.4371% ( 337) 00:08:30.437 6125.095 - 6150.302: 21.1990% ( 318) 00:08:30.437 6150.302 - 6175.508: 22.9665% ( 319) 00:08:30.437 6175.508 - 6200.714: 24.7174% ( 316) 00:08:30.437 6200.714 - 6225.920: 26.5237% ( 326) 00:08:30.437 6225.920 - 6251.126: 28.3355% ( 327) 00:08:30.437 6251.126 - 6276.332: 30.1806% ( 333) 00:08:30.437 6276.332 - 6301.538: 32.0091% ( 330) 00:08:30.437 6301.538 - 6326.745: 34.0093% ( 361) 00:08:30.437 6326.745 - 6351.951: 35.9098% ( 343) 00:08:30.437 6351.951 - 6377.157: 37.8214% ( 345) 
00:08:30.437 6377.157 - 6402.363: 39.7883% ( 355) 00:08:30.437 6402.363 - 6427.569: 41.7719% ( 358) 00:08:30.437 6427.569 - 6452.775: 43.7777% ( 362) 00:08:30.437 6452.775 - 6503.188: 47.6618% ( 701) 00:08:30.437 6503.188 - 6553.600: 51.2965% ( 656) 00:08:30.437 6553.600 - 6604.012: 54.7318% ( 620) 00:08:30.437 6604.012 - 6654.425: 57.7349% ( 542) 00:08:30.437 6654.425 - 6704.837: 60.3280% ( 468) 00:08:30.437 6704.837 - 6755.249: 62.4280% ( 379) 00:08:30.437 6755.249 - 6805.662: 64.3395% ( 345) 00:08:30.437 6805.662 - 6856.074: 65.9962% ( 299) 00:08:30.437 6856.074 - 6906.486: 67.4867% ( 269) 00:08:30.437 6906.486 - 6956.898: 68.7722% ( 232) 00:08:30.437 6956.898 - 7007.311: 70.0188% ( 225) 00:08:30.437 7007.311 - 7057.723: 71.0328% ( 183) 00:08:30.437 7057.723 - 7108.135: 71.9526% ( 166) 00:08:30.437 7108.135 - 7158.548: 72.8336% ( 159) 00:08:30.437 7158.548 - 7208.960: 73.5483% ( 129) 00:08:30.437 7208.960 - 7259.372: 74.2188% ( 121) 00:08:30.437 7259.372 - 7309.785: 74.8227% ( 109) 00:08:30.437 7309.785 - 7360.197: 75.4266% ( 109) 00:08:30.437 7360.197 - 7410.609: 76.2079% ( 141) 00:08:30.437 7410.609 - 7461.022: 76.9836% ( 140) 00:08:30.437 7461.022 - 7511.434: 77.6263% ( 116) 00:08:30.437 7511.434 - 7561.846: 78.1859% ( 101) 00:08:30.437 7561.846 - 7612.258: 78.7954% ( 110) 00:08:30.437 7612.258 - 7662.671: 79.3883% ( 107) 00:08:30.437 7662.671 - 7713.083: 80.0033% ( 111) 00:08:30.437 7713.083 - 7763.495: 80.6516% ( 117) 00:08:30.437 7763.495 - 7813.908: 81.2500% ( 108) 00:08:30.437 7813.908 - 7864.320: 81.7210% ( 85) 00:08:30.437 7864.320 - 7914.732: 82.1310% ( 74) 00:08:30.437 7914.732 - 7965.145: 82.5964% ( 84) 00:08:30.437 7965.145 - 8015.557: 83.0452% ( 81) 00:08:30.437 8015.557 - 8065.969: 83.4885% ( 80) 00:08:30.437 8065.969 - 8116.382: 83.9539% ( 84) 00:08:30.437 8116.382 - 8166.794: 84.3916% ( 79) 00:08:30.437 8166.794 - 8217.206: 84.8016% ( 74) 00:08:30.437 8217.206 - 8267.618: 85.2283% ( 77) 00:08:30.437 8267.618 - 8318.031: 85.6272% ( 72) 00:08:30.437 8318.031 - 8368.443: 86.0871% ( 83) 00:08:30.437 8368.443 - 8418.855: 86.5747% ( 88) 00:08:30.437 8418.855 - 8469.268: 86.9958% ( 76) 00:08:30.437 8469.268 - 8519.680: 87.4058% ( 74) 00:08:30.437 8519.680 - 8570.092: 87.7770% ( 67) 00:08:30.437 8570.092 - 8620.505: 88.0873% ( 56) 00:08:30.437 8620.505 - 8670.917: 88.4475% ( 65) 00:08:30.437 8670.917 - 8721.329: 88.7467% ( 54) 00:08:30.437 8721.329 - 8771.742: 89.1013% ( 64) 00:08:30.437 8771.742 - 8822.154: 89.4337% ( 60) 00:08:30.437 8822.154 - 8872.566: 89.7496% ( 57) 00:08:30.437 8872.566 - 8922.978: 90.1042% ( 64) 00:08:30.437 8922.978 - 8973.391: 90.4809% ( 68) 00:08:30.437 8973.391 - 9023.803: 90.7912% ( 56) 00:08:30.437 9023.803 - 9074.215: 91.0738% ( 51) 00:08:30.437 9074.215 - 9124.628: 91.3564% ( 51) 00:08:30.437 9124.628 - 9175.040: 91.6223% ( 48) 00:08:30.437 9175.040 - 9225.452: 91.9105% ( 52) 00:08:30.437 9225.452 - 9275.865: 92.1653% ( 46) 00:08:30.437 9275.865 - 9326.277: 92.4368% ( 49) 00:08:30.437 9326.277 - 9376.689: 92.6862% ( 45) 00:08:30.437 9376.689 - 9427.102: 92.9300% ( 44) 00:08:30.437 9427.102 - 9477.514: 93.1571% ( 41) 00:08:30.437 9477.514 - 9527.926: 93.3677% ( 38) 00:08:30.437 9527.926 - 9578.338: 93.5838% ( 39) 00:08:30.437 9578.338 - 9628.751: 93.7943% ( 38) 00:08:30.437 9628.751 - 9679.163: 93.9883% ( 35) 00:08:30.437 9679.163 - 9729.575: 94.1988% ( 38) 00:08:30.437 9729.575 - 9779.988: 94.4204% ( 40) 00:08:30.437 9779.988 - 9830.400: 94.6310% ( 38) 00:08:30.437 9830.400 - 9880.812: 94.8305% ( 36) 00:08:30.437 9880.812 - 9931.225: 95.0299% ( 
36) 00:08:30.437 9931.225 - 9981.637: 95.2238% ( 35) 00:08:30.437 9981.637 - 10032.049: 95.3790% ( 28) 00:08:30.437 10032.049 - 10082.462: 95.5618% ( 33) 00:08:30.437 10082.462 - 10132.874: 95.7668% ( 37) 00:08:30.437 10132.874 - 10183.286: 95.9441% ( 32) 00:08:30.437 10183.286 - 10233.698: 96.1325% ( 34) 00:08:30.437 10233.698 - 10284.111: 96.2711% ( 25) 00:08:30.437 10284.111 - 10334.523: 96.4207% ( 27) 00:08:30.437 10334.523 - 10384.935: 96.5536% ( 24) 00:08:30.437 10384.935 - 10435.348: 96.6811% ( 23) 00:08:30.437 10435.348 - 10485.760: 96.8030% ( 22) 00:08:30.437 10485.760 - 10536.172: 96.9249% ( 22) 00:08:30.437 10536.172 - 10586.585: 97.0468% ( 22) 00:08:30.437 10586.585 - 10636.997: 97.1687% ( 22) 00:08:30.437 10636.997 - 10687.409: 97.2961% ( 23) 00:08:30.438 10687.409 - 10737.822: 97.3958% ( 18) 00:08:30.438 10737.822 - 10788.234: 97.4900% ( 17) 00:08:30.438 10788.234 - 10838.646: 97.5898% ( 18) 00:08:30.438 10838.646 - 10889.058: 97.6895% ( 18) 00:08:30.438 10889.058 - 10939.471: 97.7837% ( 17) 00:08:30.438 10939.471 - 10989.883: 97.8834% ( 18) 00:08:30.438 10989.883 - 11040.295: 97.9665% ( 15) 00:08:30.438 11040.295 - 11090.708: 98.0386% ( 13) 00:08:30.438 11090.708 - 11141.120: 98.1272% ( 16) 00:08:30.438 11141.120 - 11191.532: 98.2103% ( 15) 00:08:30.438 11191.532 - 11241.945: 98.2713% ( 11) 00:08:30.438 11241.945 - 11292.357: 98.3378% ( 12) 00:08:30.438 11292.357 - 11342.769: 98.4098% ( 13) 00:08:30.438 11342.769 - 11393.182: 98.4818% ( 13) 00:08:30.438 11393.182 - 11443.594: 98.5483% ( 12) 00:08:30.438 11443.594 - 11494.006: 98.6314% ( 15) 00:08:30.438 11494.006 - 11544.418: 98.7035% ( 13) 00:08:30.438 11544.418 - 11594.831: 98.7699% ( 12) 00:08:30.438 11594.831 - 11645.243: 98.8032% ( 6) 00:08:30.438 11645.243 - 11695.655: 98.8309% ( 5) 00:08:30.438 11695.655 - 11746.068: 98.8697% ( 7) 00:08:30.438 11746.068 - 11796.480: 98.9029% ( 6) 00:08:30.438 11796.480 - 11846.892: 98.9306% ( 5) 00:08:30.438 11846.892 - 11897.305: 98.9639% ( 6) 00:08:30.438 11897.305 - 11947.717: 98.9971% ( 6) 00:08:30.438 11947.717 - 11998.129: 99.0248% ( 5) 00:08:30.438 11998.129 - 12048.542: 99.0581% ( 6) 00:08:30.438 12048.542 - 12098.954: 99.0802% ( 4) 00:08:30.438 12098.954 - 12149.366: 99.1135% ( 6) 00:08:30.438 12149.366 - 12199.778: 99.1412% ( 5) 00:08:30.438 12199.778 - 12250.191: 99.1744% ( 6) 00:08:30.438 12250.191 - 12300.603: 99.2077% ( 6) 00:08:30.438 12300.603 - 12351.015: 99.2188% ( 2) 00:08:30.438 12351.015 - 12401.428: 99.2298% ( 2) 00:08:30.438 12401.428 - 12451.840: 99.2465% ( 3) 00:08:30.438 12451.840 - 12502.252: 99.2575% ( 2) 00:08:30.438 12502.252 - 12552.665: 99.2686% ( 2) 00:08:30.438 12552.665 - 12603.077: 99.2852% ( 3) 00:08:30.438 12603.077 - 12653.489: 99.2908% ( 1) 00:08:30.438 13913.797 - 14014.622: 99.3074% ( 3) 00:08:30.438 14014.622 - 14115.446: 99.3296% ( 4) 00:08:30.438 14115.446 - 14216.271: 99.3573% ( 5) 00:08:30.438 14216.271 - 14317.095: 99.3794% ( 4) 00:08:30.438 14317.095 - 14417.920: 99.4016% ( 4) 00:08:30.438 14417.920 - 14518.745: 99.4238% ( 4) 00:08:30.438 14518.745 - 14619.569: 99.4404% ( 3) 00:08:30.438 14619.569 - 14720.394: 99.4681% ( 5) 00:08:30.438 14720.394 - 14821.218: 99.4902% ( 4) 00:08:30.438 14821.218 - 14922.043: 99.5124% ( 4) 00:08:30.438 14922.043 - 15022.868: 99.5346% ( 4) 00:08:30.438 15022.868 - 15123.692: 99.5623% ( 5) 00:08:30.438 15123.692 - 15224.517: 99.5844% ( 4) 00:08:30.438 15224.517 - 15325.342: 99.6066% ( 4) 00:08:30.438 15325.342 - 15426.166: 99.6288% ( 4) 00:08:30.438 15426.166 - 15526.991: 99.6454% ( 3) 00:08:30.438 
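
The cumulative column is what the summary tables further down are distilled from: a percentile such as the 99.50000% entries can be read off as the upper edge of the first bucket whose cumulative percentage reaches the target. A sketch of that lookup, assuming buckets in ascending order and using toy data shaped like the tail above rather than the real rows; with bucketed data the bucket edge is only an upper bound on the true percentile:

def percentile_us(buckets, target_pct):
    # buckets: (lo_us, hi_us, cumulative_pct, count) tuples,
    # sorted ascending. Returns the upper edge of the first
    # bucket that reaches target_pct.
    for lo, hi, cum_pct, count in buckets:
        if cum_pct >= target_pct:
            return hi
    raise ValueError("histogram never reaches %s%%" % target_pct)

toy = [
    (5595.766, 5620.972, 0.03, 6),
    (14417.920, 14518.745, 99.42, 4),
    (15426.166, 15526.991, 99.65, 3),
    (20265.748, 20366.572, 100.00, 2),
]
print(percentile_us(toy, 99.5))  # 15526.991
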
18753.378 - 18854.203: 99.6620% ( 3)
00:08:30.438 18854.203 - 18955.028: 99.6897% ( 5)
00:08:30.438 18955.028 - 19055.852: 99.7119% ( 4)
00:08:30.438 19055.852 - 19156.677: 99.7340% ( 4)
00:08:30.438 19156.677 - 19257.502: 99.7562% ( 4)
00:08:30.438 19257.502 - 19358.326: 99.7784% ( 4)
00:08:30.438 19358.326 - 19459.151: 99.8061% ( 5)
00:08:30.438 19459.151 - 19559.975: 99.8282% ( 4)
00:08:30.438 19559.975 - 19660.800: 99.8504% ( 4)
00:08:30.438 19660.800 - 19761.625: 99.8726% ( 4)
00:08:30.438 19761.625 - 19862.449: 99.8947% ( 4)
00:08:30.438 19862.449 - 19963.274: 99.9224% ( 5)
00:08:30.438 19963.274 - 20064.098: 99.9446% ( 4)
00:08:30.438 20064.098 - 20164.923: 99.9668% ( 4)
00:08:30.438 20164.923 - 20265.748: 99.9889% ( 4)
00:08:30.438 20265.748 - 20366.572: 100.0000% ( 2)
00:08:30.438 
00:08:30.438 15:56:28 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:31.819 Initializing NVMe Controllers
00:08:31.819 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:31.819 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:31.819 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:31.819 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:31.819 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:31.819 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:31.819 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:31.819 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:31.819 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:31.819 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:31.819 Initialization complete. Launching workers.
00:08:31.819 ========================================================
00:08:31.819 Latency(us)
00:08:31.819 Device Information : IOPS MiB/s Average min max
00:08:31.819 PCIE (0000:00:10.0) NSID 1 from core 0: 17183.57 201.37 7458.68 5918.94 31313.29
00:08:31.819 PCIE (0000:00:11.0) NSID 1 from core 0: 17183.57 201.37 7447.28 5908.10 29775.97
00:08:31.819 PCIE (0000:00:13.0) NSID 1 from core 0: 17183.57 201.37 7435.54 6077.01 28241.36
00:08:31.819 PCIE (0000:00:12.0) NSID 1 from core 0: 17183.57 201.37 7423.83 6052.09 26424.58
00:08:31.819 PCIE (0000:00:12.0) NSID 2 from core 0: 17183.57 201.37 7411.99 6090.36 24705.74
00:08:31.819 PCIE (0000:00:12.0) NSID 3 from core 0: 17183.57 201.37 7400.41 6083.28 23263.17
00:08:31.819 ========================================================
00:08:31.819 Total : 103101.45 1208.22 7429.62 5908.10 31313.29
00:08:31.819 
00:08:31.819 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:31.819 =================================================================================
00:08:31.819 1.00000% : 6200.714us
00:08:31.819 10.00000% : 6604.012us
00:08:31.819 25.00000% : 6805.662us
00:08:31.819 50.00000% : 7158.548us
00:08:31.819 75.00000% : 7612.258us
00:08:31.819 90.00000% : 8368.443us
00:08:31.819 95.00000% : 9124.628us
00:08:31.819 98.00000% : 10284.111us
00:08:31.819 99.00000% : 11040.295us
00:08:31.819 99.50000% : 23492.135us
00:08:31.819 99.90000% : 30852.332us
00:08:31.819 99.99000% : 31457.280us
00:08:31.819 99.99900% : 31457.280us
00:08:31.819 99.99990% : 31457.280us
00:08:31.819 99.99999% : 31457.280us
00:08:31.819 
00:08:31.819 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:31.819 =================================================================================
00:08:31.819 1.00000% : 6326.745us
00:08:31.819 10.00000% :
6654.425us 00:08:31.819 25.00000% : 6856.074us 00:08:31.819 50.00000% : 7108.135us 00:08:31.819 75.00000% : 7561.846us 00:08:31.819 90.00000% : 8318.031us 00:08:31.819 95.00000% : 9124.628us 00:08:31.819 98.00000% : 10284.111us 00:08:31.819 99.00000% : 10838.646us 00:08:31.819 99.50000% : 22887.188us 00:08:31.819 99.90000% : 29440.788us 00:08:31.819 99.99000% : 29844.086us 00:08:31.819 99.99900% : 29844.086us 00:08:31.819 99.99990% : 29844.086us 00:08:31.819 99.99999% : 29844.086us 00:08:31.819 00:08:31.819 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:31.819 ================================================================================= 00:08:31.819 1.00000% : 6351.951us 00:08:31.819 10.00000% : 6654.425us 00:08:31.819 25.00000% : 6856.074us 00:08:31.819 50.00000% : 7108.135us 00:08:31.819 75.00000% : 7612.258us 00:08:31.819 90.00000% : 8368.443us 00:08:31.819 95.00000% : 9023.803us 00:08:31.820 98.00000% : 10384.935us 00:08:31.820 99.00000% : 11090.708us 00:08:31.820 99.50000% : 21576.468us 00:08:31.820 99.90000% : 27827.594us 00:08:31.820 99.99000% : 28230.892us 00:08:31.820 99.99900% : 28432.542us 00:08:31.820 99.99990% : 28432.542us 00:08:31.820 99.99999% : 28432.542us 00:08:31.820 00:08:31.820 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:31.820 ================================================================================= 00:08:31.820 1.00000% : 6377.157us 00:08:31.820 10.00000% : 6704.837us 00:08:31.820 25.00000% : 6856.074us 00:08:31.820 50.00000% : 7108.135us 00:08:31.820 75.00000% : 7612.258us 00:08:31.820 90.00000% : 8318.031us 00:08:31.820 95.00000% : 9023.803us 00:08:31.820 98.00000% : 10233.698us 00:08:31.820 99.00000% : 11292.357us 00:08:31.820 99.50000% : 20366.572us 00:08:31.820 99.90000% : 26012.751us 00:08:31.820 99.99000% : 26416.049us 00:08:31.820 99.99900% : 26617.698us 00:08:31.820 99.99990% : 26617.698us 00:08:31.820 99.99999% : 26617.698us 00:08:31.820 00:08:31.820 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:31.820 ================================================================================= 00:08:31.820 1.00000% : 6377.157us 00:08:31.820 10.00000% : 6654.425us 00:08:31.820 25.00000% : 6856.074us 00:08:31.820 50.00000% : 7108.135us 00:08:31.820 75.00000% : 7561.846us 00:08:31.820 90.00000% : 8318.031us 00:08:31.820 95.00000% : 9023.803us 00:08:31.820 98.00000% : 10132.874us 00:08:31.820 99.00000% : 11141.120us 00:08:31.820 99.50000% : 19156.677us 00:08:31.820 99.90000% : 24298.732us 00:08:31.820 99.99000% : 24702.031us 00:08:31.820 99.99900% : 24802.855us 00:08:31.820 99.99990% : 24802.855us 00:08:31.820 99.99999% : 24802.855us 00:08:31.820 00:08:31.820 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:31.820 ================================================================================= 00:08:31.820 1.00000% : 6377.157us 00:08:31.820 10.00000% : 6654.425us 00:08:31.820 25.00000% : 6856.074us 00:08:31.820 50.00000% : 7108.135us 00:08:31.820 75.00000% : 7612.258us 00:08:31.820 90.00000% : 8368.443us 00:08:31.820 95.00000% : 9023.803us 00:08:31.820 98.00000% : 10082.462us 00:08:31.820 99.00000% : 11090.708us 00:08:31.820 99.50000% : 17140.185us 00:08:31.820 99.90000% : 22887.188us 00:08:31.820 99.99000% : 23290.486us 00:08:31.820 99.99900% : 23290.486us 00:08:31.820 99.99990% : 23290.486us 00:08:31.820 99.99999% : 23290.486us 00:08:31.820 00:08:31.820 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:31.820 
============================================================================== 00:08:31.820 Range in us Cumulative IO count 00:08:31.820 5898.240 - 5923.446: 0.0058% ( 1) 00:08:31.820 5948.652 - 5973.858: 0.0116% ( 1) 00:08:31.820 5973.858 - 5999.065: 0.0232% ( 2) 00:08:31.820 5999.065 - 6024.271: 0.0407% ( 3) 00:08:31.820 6024.271 - 6049.477: 0.1220% ( 14) 00:08:31.820 6049.477 - 6074.683: 0.1975% ( 13) 00:08:31.820 6074.683 - 6099.889: 0.3834% ( 32) 00:08:31.820 6099.889 - 6125.095: 0.5286% ( 25) 00:08:31.820 6125.095 - 6150.302: 0.6506% ( 21) 00:08:31.820 6150.302 - 6175.508: 0.8132% ( 28) 00:08:31.820 6175.508 - 6200.714: 1.0165% ( 35) 00:08:31.820 6200.714 - 6225.920: 1.2140% ( 34) 00:08:31.820 6225.920 - 6251.126: 1.4173% ( 35) 00:08:31.820 6251.126 - 6276.332: 1.6554% ( 41) 00:08:31.820 6276.332 - 6301.538: 1.9284% ( 47) 00:08:31.820 6301.538 - 6326.745: 2.3989% ( 81) 00:08:31.820 6326.745 - 6351.951: 2.7533% ( 61) 00:08:31.820 6351.951 - 6377.157: 3.1192% ( 63) 00:08:31.820 6377.157 - 6402.363: 3.5084% ( 67) 00:08:31.820 6402.363 - 6427.569: 3.9266% ( 72) 00:08:31.820 6427.569 - 6452.775: 4.4784% ( 95) 00:08:31.820 6452.775 - 6503.188: 6.5172% ( 351) 00:08:31.820 6503.188 - 6553.600: 8.6431% ( 366) 00:08:31.820 6553.600 - 6604.012: 11.2628% ( 451) 00:08:31.820 6604.012 - 6654.425: 14.9977% ( 643) 00:08:31.820 6654.425 - 6704.837: 18.6919% ( 636) 00:08:31.820 6704.837 - 6755.249: 22.4907% ( 654) 00:08:31.820 6755.249 - 6805.662: 26.1908% ( 637) 00:08:31.820 6805.662 - 6856.074: 30.3148% ( 710) 00:08:31.820 6856.074 - 6906.486: 34.3576% ( 696) 00:08:31.820 6906.486 - 6956.898: 38.2435% ( 669) 00:08:31.820 6956.898 - 7007.311: 42.1236% ( 668) 00:08:31.820 7007.311 - 7057.723: 45.8411% ( 640) 00:08:31.820 7057.723 - 7108.135: 49.2449% ( 586) 00:08:31.820 7108.135 - 7158.548: 52.8346% ( 618) 00:08:31.820 7158.548 - 7208.960: 56.0118% ( 547) 00:08:31.820 7208.960 - 7259.372: 59.1020% ( 532) 00:08:31.820 7259.372 - 7309.785: 61.9714% ( 494) 00:08:31.820 7309.785 - 7360.197: 64.6085% ( 454) 00:08:31.820 7360.197 - 7410.609: 67.2165% ( 449) 00:08:31.820 7410.609 - 7461.022: 69.5632% ( 404) 00:08:31.820 7461.022 - 7511.434: 71.7414% ( 375) 00:08:31.820 7511.434 - 7561.846: 73.6815% ( 334) 00:08:31.820 7561.846 - 7612.258: 75.6564% ( 340) 00:08:31.820 7612.258 - 7662.671: 77.1434% ( 256) 00:08:31.820 7662.671 - 7713.083: 78.4851% ( 231) 00:08:31.820 7713.083 - 7763.495: 79.6643% ( 203) 00:08:31.820 7763.495 - 7813.908: 80.8434% ( 203) 00:08:31.820 7813.908 - 7864.320: 81.9703% ( 194) 00:08:31.820 7864.320 - 7914.732: 82.9229% ( 164) 00:08:31.820 7914.732 - 7965.145: 83.8987% ( 168) 00:08:31.820 7965.145 - 8015.557: 84.8397% ( 162) 00:08:31.820 8015.557 - 8065.969: 85.8329% ( 171) 00:08:31.820 8065.969 - 8116.382: 86.6810% ( 146) 00:08:31.820 8116.382 - 8166.794: 87.4013% ( 124) 00:08:31.820 8166.794 - 8217.206: 88.0867% ( 118) 00:08:31.820 8217.206 - 8267.618: 88.9057% ( 141) 00:08:31.820 8267.618 - 8318.031: 89.5911% ( 118) 00:08:31.820 8318.031 - 8368.443: 90.0964% ( 87) 00:08:31.820 8368.443 - 8418.855: 90.6424% ( 94) 00:08:31.820 8418.855 - 8469.268: 91.0374% ( 68) 00:08:31.820 8469.268 - 8519.680: 91.5195% ( 83) 00:08:31.820 8519.680 - 8570.092: 91.9319% ( 71) 00:08:31.820 8570.092 - 8620.505: 92.3734% ( 76) 00:08:31.820 8620.505 - 8670.917: 92.7103% ( 58) 00:08:31.820 8670.917 - 8721.329: 93.0123% ( 52) 00:08:31.820 8721.329 - 8771.742: 93.3086% ( 51) 00:08:31.820 8771.742 - 8822.154: 93.6687% ( 62) 00:08:31.820 8822.154 - 8872.566: 93.9998% ( 57) 00:08:31.820 8872.566 - 8922.978: 
94.2670% ( 46) 00:08:31.820 8922.978 - 8973.391: 94.4935% ( 39) 00:08:31.820 8973.391 - 9023.803: 94.7665% ( 47) 00:08:31.820 9023.803 - 9074.215: 94.9175% ( 26) 00:08:31.820 9074.215 - 9124.628: 95.0802% ( 28) 00:08:31.820 9124.628 - 9175.040: 95.3474% ( 46) 00:08:31.820 9175.040 - 9225.452: 95.5332% ( 32) 00:08:31.820 9225.452 - 9275.865: 95.6959% ( 28) 00:08:31.820 9275.865 - 9326.277: 95.8295% ( 23) 00:08:31.820 9326.277 - 9376.689: 95.9166% ( 15) 00:08:31.820 9376.689 - 9427.102: 96.0908% ( 30) 00:08:31.820 9427.102 - 9477.514: 96.2012% ( 19) 00:08:31.820 9477.514 - 9527.926: 96.2883% ( 15) 00:08:31.820 9527.926 - 9578.338: 96.3987% ( 19) 00:08:31.820 9578.338 - 9628.751: 96.4916% ( 16) 00:08:31.820 9628.751 - 9679.163: 96.5788% ( 15) 00:08:31.820 9679.163 - 9729.575: 96.7007% ( 21) 00:08:31.820 9729.575 - 9779.988: 96.8227% ( 21) 00:08:31.820 9779.988 - 9830.400: 96.9215% ( 17) 00:08:31.820 9830.400 - 9880.812: 96.9854% ( 11) 00:08:31.820 9880.812 - 9931.225: 97.0841% ( 17) 00:08:31.820 9931.225 - 9981.637: 97.1596% ( 13) 00:08:31.820 9981.637 - 10032.049: 97.2758% ( 20) 00:08:31.820 10032.049 - 10082.462: 97.4326% ( 27) 00:08:31.820 10082.462 - 10132.874: 97.5836% ( 26) 00:08:31.820 10132.874 - 10183.286: 97.8044% ( 38) 00:08:31.820 10183.286 - 10233.698: 97.9322% ( 22) 00:08:31.820 10233.698 - 10284.111: 98.0541% ( 21) 00:08:31.820 10284.111 - 10334.523: 98.1180% ( 11) 00:08:31.820 10334.523 - 10384.935: 98.1819% ( 11) 00:08:31.820 10384.935 - 10435.348: 98.2284% ( 8) 00:08:31.820 10435.348 - 10485.760: 98.2749% ( 8) 00:08:31.820 10485.760 - 10536.172: 98.3329% ( 10) 00:08:31.820 10536.172 - 10586.585: 98.5479% ( 37) 00:08:31.820 10586.585 - 10636.997: 98.6292% ( 14) 00:08:31.820 10636.997 - 10687.409: 98.6640% ( 6) 00:08:31.820 10687.409 - 10737.822: 98.7163% ( 9) 00:08:31.820 10737.822 - 10788.234: 98.7512% ( 6) 00:08:31.820 10788.234 - 10838.646: 98.8325% ( 14) 00:08:31.820 10838.646 - 10889.058: 98.8673% ( 6) 00:08:31.820 10889.058 - 10939.471: 98.9196% ( 9) 00:08:31.820 10939.471 - 10989.883: 98.9661% ( 8) 00:08:31.820 10989.883 - 11040.295: 99.0009% ( 6) 00:08:31.820 11040.295 - 11090.708: 99.0358% ( 6) 00:08:31.820 11090.708 - 11141.120: 99.0822% ( 8) 00:08:31.820 11141.120 - 11191.532: 99.1113% ( 5) 00:08:31.820 11191.532 - 11241.945: 99.1520% ( 7) 00:08:31.820 11241.945 - 11292.357: 99.1810% ( 5) 00:08:31.820 11292.357 - 11342.769: 99.2158% ( 6) 00:08:31.820 11342.769 - 11393.182: 99.2333% ( 3) 00:08:31.820 11393.182 - 11443.594: 99.2391% ( 1) 00:08:31.820 11443.594 - 11494.006: 99.2449% ( 1) 00:08:31.820 11494.006 - 11544.418: 99.2507% ( 1) 00:08:31.820 11544.418 - 11594.831: 99.2565% ( 1) 00:08:31.820 21778.117 - 21878.942: 99.2623% ( 1) 00:08:31.820 21878.942 - 21979.766: 99.2739% ( 2) 00:08:31.820 21979.766 - 22080.591: 99.2914% ( 3) 00:08:31.820 22080.591 - 22181.415: 99.3088% ( 3) 00:08:31.820 22181.415 - 22282.240: 99.3262% ( 3) 00:08:31.820 22282.240 - 22383.065: 99.3436% ( 3) 00:08:31.820 22383.065 - 22483.889: 99.3553% ( 2) 00:08:31.820 22483.889 - 22584.714: 99.3727% ( 3) 00:08:31.820 22584.714 - 22685.538: 99.3901% ( 3) 00:08:31.820 22685.538 - 22786.363: 99.4017% ( 2) 00:08:31.820 22786.363 - 22887.188: 99.4191% ( 3) 00:08:31.821 22887.188 - 22988.012: 99.4366% ( 3) 00:08:31.821 22988.012 - 23088.837: 99.4482% ( 2) 00:08:31.821 23088.837 - 23189.662: 99.4656% ( 3) 00:08:31.821 23189.662 - 23290.486: 99.4772% ( 2) 00:08:31.821 23290.486 - 23391.311: 99.4947% ( 3) 00:08:31.821 23391.311 - 23492.135: 99.5121% ( 3) 00:08:31.821 23492.135 - 23592.960: 99.5295% ( 3) 
00:08:31.821 23592.960 - 23693.785: 99.5469% ( 3) 00:08:31.821 23693.785 - 23794.609: 99.5644% ( 3) 00:08:31.821 23794.609 - 23895.434: 99.5818% ( 3) 00:08:31.821 23895.434 - 23996.258: 99.5992% ( 3) 00:08:31.821 23996.258 - 24097.083: 99.6166% ( 3) 00:08:31.821 24097.083 - 24197.908: 99.6283% ( 2) 00:08:31.821 29037.489 - 29239.138: 99.6399% ( 2) 00:08:31.821 29239.138 - 29440.788: 99.6747% ( 6) 00:08:31.821 29440.788 - 29642.437: 99.7096% ( 6) 00:08:31.821 29642.437 - 29844.086: 99.7444% ( 6) 00:08:31.821 29844.086 - 30045.735: 99.7793% ( 6) 00:08:31.821 30045.735 - 30247.385: 99.8083% ( 5) 00:08:31.821 30247.385 - 30449.034: 99.8432% ( 6) 00:08:31.821 30449.034 - 30650.683: 99.8780% ( 6) 00:08:31.821 30650.683 - 30852.332: 99.9129% ( 6) 00:08:31.821 30852.332 - 31053.982: 99.9535% ( 7) 00:08:31.821 31053.982 - 31255.631: 99.9884% ( 6) 00:08:31.821 31255.631 - 31457.280: 100.0000% ( 2) 00:08:31.821 00:08:31.821 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:31.821 ============================================================================== 00:08:31.821 Range in us Cumulative IO count 00:08:31.821 5898.240 - 5923.446: 0.0058% ( 1) 00:08:31.821 5923.446 - 5948.652: 0.0116% ( 1) 00:08:31.821 6024.271 - 6049.477: 0.0232% ( 2) 00:08:31.821 6074.683 - 6099.889: 0.0290% ( 1) 00:08:31.821 6125.095 - 6150.302: 0.0523% ( 4) 00:08:31.821 6150.302 - 6175.508: 0.0987% ( 8) 00:08:31.821 6175.508 - 6200.714: 0.1626% ( 11) 00:08:31.821 6200.714 - 6225.920: 0.2846% ( 21) 00:08:31.821 6225.920 - 6251.126: 0.4182% ( 23) 00:08:31.821 6251.126 - 6276.332: 0.6331% ( 37) 00:08:31.821 6276.332 - 6301.538: 0.8655% ( 40) 00:08:31.821 6301.538 - 6326.745: 1.0630% ( 34) 00:08:31.821 6326.745 - 6351.951: 1.2488% ( 32) 00:08:31.821 6351.951 - 6377.157: 1.5567% ( 53) 00:08:31.821 6377.157 - 6402.363: 1.8878% ( 57) 00:08:31.821 6402.363 - 6427.569: 2.3118% ( 73) 00:08:31.821 6427.569 - 6452.775: 2.7649% ( 78) 00:08:31.821 6452.775 - 6503.188: 3.7349% ( 167) 00:08:31.821 6503.188 - 6553.600: 5.4775% ( 300) 00:08:31.821 6553.600 - 6604.012: 7.2607% ( 307) 00:08:31.821 6604.012 - 6654.425: 10.0197% ( 475) 00:08:31.821 6654.425 - 6704.837: 13.6327% ( 622) 00:08:31.821 6704.837 - 6755.249: 17.8845% ( 732) 00:08:31.821 6755.249 - 6805.662: 22.3397% ( 767) 00:08:31.821 6805.662 - 6856.074: 27.6080% ( 907) 00:08:31.821 6856.074 - 6906.486: 32.2955% ( 807) 00:08:31.821 6906.486 - 6956.898: 36.7681% ( 770) 00:08:31.821 6956.898 - 7007.311: 41.3336% ( 786) 00:08:31.821 7007.311 - 7057.723: 46.2767% ( 851) 00:08:31.821 7057.723 - 7108.135: 51.1733% ( 843) 00:08:31.821 7108.135 - 7158.548: 55.0767% ( 672) 00:08:31.821 7158.548 - 7208.960: 58.7070% ( 625) 00:08:31.821 7208.960 - 7259.372: 62.1980% ( 601) 00:08:31.821 7259.372 - 7309.785: 65.1429% ( 507) 00:08:31.821 7309.785 - 7360.197: 67.5244% ( 410) 00:08:31.821 7360.197 - 7410.609: 70.0511% ( 435) 00:08:31.821 7410.609 - 7461.022: 72.0202% ( 339) 00:08:31.821 7461.022 - 7511.434: 73.7279% ( 294) 00:08:31.821 7511.434 - 7561.846: 75.3776% ( 284) 00:08:31.821 7561.846 - 7612.258: 76.7542% ( 237) 00:08:31.821 7612.258 - 7662.671: 78.1424% ( 239) 00:08:31.821 7662.671 - 7713.083: 79.5249% ( 238) 00:08:31.821 7713.083 - 7763.495: 80.8202% ( 223) 00:08:31.821 7763.495 - 7813.908: 81.9586% ( 196) 00:08:31.821 7813.908 - 7864.320: 83.0739% ( 192) 00:08:31.821 7864.320 - 7914.732: 83.9568% ( 152) 00:08:31.821 7914.732 - 7965.145: 84.9617% ( 173) 00:08:31.821 7965.145 - 8015.557: 85.8678% ( 156) 00:08:31.821 8015.557 - 8065.969: 86.4370% ( 98) 00:08:31.821 8065.969 
- 8116.382: 87.1050% ( 115) 00:08:31.821 8116.382 - 8166.794: 88.0112% ( 156) 00:08:31.821 8166.794 - 8217.206: 88.7198% ( 122) 00:08:31.821 8217.206 - 8267.618: 89.6840% ( 166) 00:08:31.821 8267.618 - 8318.031: 90.2939% ( 105) 00:08:31.821 8318.031 - 8368.443: 90.7760% ( 83) 00:08:31.821 8368.443 - 8418.855: 91.1652% ( 67) 00:08:31.821 8418.855 - 8469.268: 91.5544% ( 67) 00:08:31.821 8469.268 - 8519.680: 91.9726% ( 72) 00:08:31.821 8519.680 - 8570.092: 92.4721% ( 86) 00:08:31.821 8570.092 - 8620.505: 92.7567% ( 49) 00:08:31.821 8620.505 - 8670.917: 93.0472% ( 50) 00:08:31.821 8670.917 - 8721.329: 93.5118% ( 80) 00:08:31.821 8721.329 - 8771.742: 93.7790% ( 46) 00:08:31.821 8771.742 - 8822.154: 93.9940% ( 37) 00:08:31.821 8822.154 - 8872.566: 94.1973% ( 35) 00:08:31.821 8872.566 - 8922.978: 94.4238% ( 39) 00:08:31.821 8922.978 - 8973.391: 94.6387% ( 37) 00:08:31.821 8973.391 - 9023.803: 94.7955% ( 27) 00:08:31.821 9023.803 - 9074.215: 94.9233% ( 22) 00:08:31.821 9074.215 - 9124.628: 95.0743% ( 26) 00:08:31.821 9124.628 - 9175.040: 95.1905% ( 20) 00:08:31.821 9175.040 - 9225.452: 95.3996% ( 36) 00:08:31.821 9225.452 - 9275.865: 95.6262% ( 39) 00:08:31.821 9275.865 - 9326.277: 95.8004% ( 30) 00:08:31.821 9326.277 - 9376.689: 96.0560% ( 44) 00:08:31.821 9376.689 - 9427.102: 96.2361% ( 31) 00:08:31.821 9427.102 - 9477.514: 96.3580% ( 21) 00:08:31.821 9477.514 - 9527.926: 96.4742% ( 20) 00:08:31.821 9527.926 - 9578.338: 96.6427% ( 29) 00:08:31.821 9578.338 - 9628.751: 96.8343% ( 33) 00:08:31.821 9628.751 - 9679.163: 97.0144% ( 31) 00:08:31.821 9679.163 - 9729.575: 97.1015% ( 15) 00:08:31.821 9729.575 - 9779.988: 97.1945% ( 16) 00:08:31.821 9779.988 - 9830.400: 97.2584% ( 11) 00:08:31.821 9830.400 - 9880.812: 97.3339% ( 13) 00:08:31.821 9880.812 - 9931.225: 97.3978% ( 11) 00:08:31.821 9931.225 - 9981.637: 97.4675% ( 12) 00:08:31.821 9981.637 - 10032.049: 97.5081% ( 7) 00:08:31.821 10032.049 - 10082.462: 97.5430% ( 6) 00:08:31.821 10082.462 - 10132.874: 97.6650% ( 21) 00:08:31.821 10132.874 - 10183.286: 97.8218% ( 27) 00:08:31.821 10183.286 - 10233.698: 97.9554% ( 23) 00:08:31.821 10233.698 - 10284.111: 98.1006% ( 25) 00:08:31.821 10284.111 - 10334.523: 98.2284% ( 22) 00:08:31.821 10334.523 - 10384.935: 98.3736% ( 25) 00:08:31.821 10384.935 - 10435.348: 98.5130% ( 24) 00:08:31.821 10435.348 - 10485.760: 98.7163% ( 35) 00:08:31.821 10485.760 - 10536.172: 98.7628% ( 8) 00:08:31.821 10536.172 - 10586.585: 98.8325% ( 12) 00:08:31.821 10586.585 - 10636.997: 98.8731% ( 7) 00:08:31.821 10636.997 - 10687.409: 98.9196% ( 8) 00:08:31.821 10687.409 - 10737.822: 98.9487% ( 5) 00:08:31.821 10737.822 - 10788.234: 98.9893% ( 7) 00:08:31.821 10788.234 - 10838.646: 99.0125% ( 4) 00:08:31.821 10838.646 - 10889.058: 99.0300% ( 3) 00:08:31.821 10889.058 - 10939.471: 99.0532% ( 4) 00:08:31.821 10939.471 - 10989.883: 99.0706% ( 3) 00:08:31.821 10989.883 - 11040.295: 99.0997% ( 5) 00:08:31.821 11040.295 - 11090.708: 99.1578% ( 10) 00:08:31.821 11090.708 - 11141.120: 99.2275% ( 12) 00:08:31.821 11141.120 - 11191.532: 99.2333% ( 1) 00:08:31.821 11191.532 - 11241.945: 99.2507% ( 3) 00:08:31.821 11241.945 - 11292.357: 99.2565% ( 1) 00:08:31.821 21979.766 - 22080.591: 99.2681% ( 2) 00:08:31.821 22080.591 - 22181.415: 99.3088% ( 7) 00:08:31.821 22181.415 - 22282.240: 99.3378% ( 5) 00:08:31.821 22282.240 - 22383.065: 99.3785% ( 7) 00:08:31.821 22383.065 - 22483.889: 99.4075% ( 5) 00:08:31.821 22483.889 - 22584.714: 99.4482% ( 7) 00:08:31.821 22584.714 - 22685.538: 99.4656% ( 3) 00:08:31.821 22685.538 - 22786.363: 99.4830% ( 3) 
00:08:31.821 22786.363 - 22887.188: 99.5005% ( 3) 00:08:31.821 22887.188 - 22988.012: 99.5121% ( 2) 00:08:31.821 22988.012 - 23088.837: 99.5295% ( 3) 00:08:31.821 23088.837 - 23189.662: 99.5469% ( 3) 00:08:31.821 23189.662 - 23290.486: 99.5644% ( 3) 00:08:31.821 23290.486 - 23391.311: 99.5760% ( 2) 00:08:31.821 23391.311 - 23492.135: 99.5934% ( 3) 00:08:31.821 23492.135 - 23592.960: 99.6050% ( 2) 00:08:31.821 23592.960 - 23693.785: 99.6224% ( 3) 00:08:31.822 23693.785 - 23794.609: 99.6283% ( 1) 00:08:31.822 27020.997 - 27222.646: 99.6573% ( 5) 00:08:31.822 27222.646 - 27424.295: 99.7096% ( 9) 00:08:31.822 27424.295 - 27625.945: 99.7444% ( 6) 00:08:31.822 28230.892 - 28432.542: 99.7735% ( 5) 00:08:31.822 28432.542 - 28634.191: 99.8083% ( 6) 00:08:31.822 28634.191 - 28835.840: 99.8374% ( 5) 00:08:31.822 28835.840 - 29037.489: 99.8664% ( 5) 00:08:31.822 29037.489 - 29239.138: 99.8954% ( 5) 00:08:31.822 29239.138 - 29440.788: 99.9303% ( 6) 00:08:31.822 29440.788 - 29642.437: 99.9651% ( 6) 00:08:31.822 29642.437 - 29844.086: 100.0000% ( 6) 00:08:31.822 00:08:31.822 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:31.822 ============================================================================== 00:08:31.822 Range in us Cumulative IO count 00:08:31.822 6074.683 - 6099.889: 0.0058% ( 1) 00:08:31.822 6099.889 - 6125.095: 0.0232% ( 3) 00:08:31.822 6125.095 - 6150.302: 0.0407% ( 3) 00:08:31.822 6150.302 - 6175.508: 0.0755% ( 6) 00:08:31.822 6175.508 - 6200.714: 0.1452% ( 12) 00:08:31.822 6200.714 - 6225.920: 0.2091% ( 11) 00:08:31.822 6225.920 - 6251.126: 0.2904% ( 14) 00:08:31.822 6251.126 - 6276.332: 0.4182% ( 22) 00:08:31.822 6276.332 - 6301.538: 0.5867% ( 29) 00:08:31.822 6301.538 - 6326.745: 0.7783% ( 33) 00:08:31.822 6326.745 - 6351.951: 1.0455% ( 46) 00:08:31.822 6351.951 - 6377.157: 1.4115% ( 63) 00:08:31.822 6377.157 - 6402.363: 1.7484% ( 58) 00:08:31.822 6402.363 - 6427.569: 2.2653% ( 89) 00:08:31.822 6427.569 - 6452.775: 2.9043% ( 110) 00:08:31.822 6452.775 - 6503.188: 4.2344% ( 229) 00:08:31.822 6503.188 - 6553.600: 5.5123% ( 220) 00:08:31.822 6553.600 - 6604.012: 7.6499% ( 368) 00:08:31.822 6604.012 - 6654.425: 10.1243% ( 426) 00:08:31.822 6654.425 - 6704.837: 13.5746% ( 594) 00:08:31.822 6704.837 - 6755.249: 17.4954% ( 675) 00:08:31.822 6755.249 - 6805.662: 22.4617% ( 855) 00:08:31.822 6805.662 - 6856.074: 27.6836% ( 899) 00:08:31.822 6856.074 - 6906.486: 32.7660% ( 875) 00:08:31.822 6906.486 - 6956.898: 38.5862% ( 1002) 00:08:31.822 6956.898 - 7007.311: 43.5990% ( 863) 00:08:31.822 7007.311 - 7057.723: 47.8915% ( 739) 00:08:31.822 7057.723 - 7108.135: 52.2653% ( 753) 00:08:31.822 7108.135 - 7158.548: 56.1164% ( 663) 00:08:31.822 7158.548 - 7208.960: 59.9907% ( 667) 00:08:31.822 7208.960 - 7259.372: 63.0809% ( 532) 00:08:31.822 7259.372 - 7309.785: 66.1187% ( 523) 00:08:31.822 7309.785 - 7360.197: 68.2795% ( 372) 00:08:31.822 7360.197 - 7410.609: 69.7375% ( 251) 00:08:31.822 7410.609 - 7461.022: 71.3058% ( 270) 00:08:31.822 7461.022 - 7511.434: 72.9844% ( 289) 00:08:31.822 7511.434 - 7561.846: 74.7328% ( 301) 00:08:31.822 7561.846 - 7612.258: 76.1443% ( 243) 00:08:31.822 7612.258 - 7662.671: 77.5441% ( 241) 00:08:31.822 7662.671 - 7713.083: 78.9498% ( 242) 00:08:31.822 7713.083 - 7763.495: 80.0418% ( 188) 00:08:31.822 7763.495 - 7813.908: 81.4126% ( 236) 00:08:31.822 7813.908 - 7864.320: 82.4059% ( 171) 00:08:31.822 7864.320 - 7914.732: 83.3992% ( 171) 00:08:31.822 7914.732 - 7965.145: 84.4621% ( 183) 00:08:31.822 7965.145 - 8015.557: 85.6122% ( 198) 00:08:31.822 
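
The Device Information table above is internally consistent with the command line: at the 12288-byte IO size requested with -o 12288, the per-namespace 17183.57 IOPS and 201.37 MiB/s columns describe the same rate. A quick arithmetic sketch of that check ("mib_per_s" is an illustrative helper, not part of the test suite):

IO_SIZE = 12288        # bytes, from "-o 12288" on the spdk_nvme_perf line
MIB = 1024 * 1024

def mib_per_s(iops, io_size=IO_SIZE):
    # MiB/s implied by an IOPS figure at a fixed IO size.
    return iops * io_size / MIB

print(round(mib_per_s(17183.57), 2))   # 201.37, per namespace
print(round(mib_per_s(103101.45), 2))  # 1208.22, the Total row
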
8015.557 - 8065.969: 86.2512% ( 110) 00:08:31.822 8065.969 - 8116.382: 86.9656% ( 123) 00:08:31.822 8116.382 - 8166.794: 87.8311% ( 149) 00:08:31.822 8166.794 - 8217.206: 88.6559% ( 142) 00:08:31.822 8217.206 - 8267.618: 89.2716% ( 106) 00:08:31.822 8267.618 - 8318.031: 89.8002% ( 91) 00:08:31.822 8318.031 - 8368.443: 90.5379% ( 127) 00:08:31.822 8368.443 - 8418.855: 90.8864% ( 60) 00:08:31.822 8418.855 - 8469.268: 91.2175% ( 57) 00:08:31.822 8469.268 - 8519.680: 91.5950% ( 65) 00:08:31.822 8519.680 - 8570.092: 92.2630% ( 115) 00:08:31.822 8570.092 - 8620.505: 92.7045% ( 76) 00:08:31.822 8620.505 - 8670.917: 92.9833% ( 48) 00:08:31.822 8670.917 - 8721.329: 93.2563% ( 47) 00:08:31.822 8721.329 - 8771.742: 93.6164% ( 62) 00:08:31.822 8771.742 - 8822.154: 93.8313% ( 37) 00:08:31.822 8822.154 - 8872.566: 94.0927% ( 45) 00:08:31.822 8872.566 - 8922.978: 94.4238% ( 57) 00:08:31.822 8922.978 - 8973.391: 94.8536% ( 74) 00:08:31.822 8973.391 - 9023.803: 95.0279% ( 30) 00:08:31.822 9023.803 - 9074.215: 95.1963% ( 29) 00:08:31.822 9074.215 - 9124.628: 95.3880% ( 33) 00:08:31.822 9124.628 - 9175.040: 95.6320% ( 42) 00:08:31.822 9175.040 - 9225.452: 95.8237% ( 33) 00:08:31.822 9225.452 - 9275.865: 96.0386% ( 37) 00:08:31.822 9275.865 - 9326.277: 96.3290% ( 50) 00:08:31.822 9326.277 - 9376.689: 96.7124% ( 66) 00:08:31.822 9376.689 - 9427.102: 96.8692% ( 27) 00:08:31.822 9427.102 - 9477.514: 97.0086% ( 24) 00:08:31.822 9477.514 - 9527.926: 97.1132% ( 18) 00:08:31.822 9527.926 - 9578.338: 97.3571% ( 42) 00:08:31.822 9578.338 - 9628.751: 97.4210% ( 11) 00:08:31.822 9628.751 - 9679.163: 97.4791% ( 10) 00:08:31.822 9679.163 - 9729.575: 97.5430% ( 11) 00:08:31.822 9729.575 - 9779.988: 97.5836% ( 7) 00:08:31.822 9779.988 - 9830.400: 97.6127% ( 5) 00:08:31.822 9830.400 - 9880.812: 97.6475% ( 6) 00:08:31.822 9880.812 - 9931.225: 97.6882% ( 7) 00:08:31.822 9931.225 - 9981.637: 97.7114% ( 4) 00:08:31.822 9981.637 - 10032.049: 97.7347% ( 4) 00:08:31.822 10032.049 - 10082.462: 97.7579% ( 4) 00:08:31.822 10082.462 - 10132.874: 97.7928% ( 6) 00:08:31.822 10132.874 - 10183.286: 97.8160% ( 4) 00:08:31.822 10183.286 - 10233.698: 97.8392% ( 4) 00:08:31.822 10233.698 - 10284.111: 97.8915% ( 9) 00:08:31.822 10284.111 - 10334.523: 97.9844% ( 16) 00:08:31.822 10334.523 - 10384.935: 98.0367% ( 9) 00:08:31.822 10384.935 - 10435.348: 98.0599% ( 4) 00:08:31.822 10435.348 - 10485.760: 98.0948% ( 6) 00:08:31.822 10485.760 - 10536.172: 98.1471% ( 9) 00:08:31.822 10536.172 - 10586.585: 98.2284% ( 14) 00:08:31.822 10586.585 - 10636.997: 98.2865% ( 10) 00:08:31.822 10636.997 - 10687.409: 98.3620% ( 13) 00:08:31.822 10687.409 - 10737.822: 98.4201% ( 10) 00:08:31.822 10737.822 - 10788.234: 98.4724% ( 9) 00:08:31.822 10788.234 - 10838.646: 98.5304% ( 10) 00:08:31.822 10838.646 - 10889.058: 98.5943% ( 11) 00:08:31.822 10889.058 - 10939.471: 98.7163% ( 21) 00:08:31.822 10939.471 - 10989.883: 98.8731% ( 27) 00:08:31.822 10989.883 - 11040.295: 98.9951% ( 21) 00:08:31.822 11040.295 - 11090.708: 99.0648% ( 12) 00:08:31.822 11090.708 - 11141.120: 99.0997% ( 6) 00:08:31.822 11141.120 - 11191.532: 99.1287% ( 5) 00:08:31.822 11191.532 - 11241.945: 99.1461% ( 3) 00:08:31.822 11241.945 - 11292.357: 99.1636% ( 3) 00:08:31.822 11292.357 - 11342.769: 99.1810% ( 3) 00:08:31.822 11342.769 - 11393.182: 99.1984% ( 3) 00:08:31.822 11393.182 - 11443.594: 99.2100% ( 2) 00:08:31.822 11443.594 - 11494.006: 99.2217% ( 2) 00:08:31.822 11494.006 - 11544.418: 99.2333% ( 2) 00:08:31.822 11544.418 - 11594.831: 99.2449% ( 2) 00:08:31.822 11594.831 - 11645.243: 99.2565% 
( 2) 00:08:31.822 20366.572 - 20467.397: 99.2681% ( 2) 00:08:31.822 20467.397 - 20568.222: 99.2914% ( 4) 00:08:31.822 20568.222 - 20669.046: 99.3146% ( 4) 00:08:31.822 20669.046 - 20769.871: 99.3378% ( 4) 00:08:31.822 20769.871 - 20870.695: 99.3669% ( 5) 00:08:31.822 20870.695 - 20971.520: 99.3843% ( 3) 00:08:31.822 20971.520 - 21072.345: 99.4133% ( 5) 00:08:31.822 21072.345 - 21173.169: 99.4366% ( 4) 00:08:31.822 21173.169 - 21273.994: 99.4540% ( 3) 00:08:31.822 21273.994 - 21374.818: 99.4714% ( 3) 00:08:31.822 21374.818 - 21475.643: 99.4888% ( 3) 00:08:31.822 21475.643 - 21576.468: 99.5063% ( 3) 00:08:31.822 21576.468 - 21677.292: 99.5237% ( 3) 00:08:31.822 21677.292 - 21778.117: 99.5411% ( 3) 00:08:31.822 21778.117 - 21878.942: 99.5586% ( 3) 00:08:31.822 21878.942 - 21979.766: 99.5760% ( 3) 00:08:31.822 21979.766 - 22080.591: 99.5934% ( 3) 00:08:31.822 22080.591 - 22181.415: 99.6050% ( 2) 00:08:31.822 22181.415 - 22282.240: 99.6224% ( 3) 00:08:31.822 22282.240 - 22383.065: 99.6283% ( 1) 00:08:31.822 25811.102 - 26012.751: 99.6573% ( 5) 00:08:31.822 26012.751 - 26214.400: 99.7328% ( 13) 00:08:31.822 27020.997 - 27222.646: 99.7793% ( 8) 00:08:31.822 27222.646 - 27424.295: 99.8257% ( 8) 00:08:31.822 27424.295 - 27625.945: 99.8664% ( 7) 00:08:31.822 27625.945 - 27827.594: 99.9129% ( 8) 00:08:31.822 27827.594 - 28029.243: 99.9535% ( 7) 00:08:31.822 28029.243 - 28230.892: 99.9942% ( 7) 00:08:31.822 28230.892 - 28432.542: 100.0000% ( 1) 00:08:31.822 00:08:31.822 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:31.822 ============================================================================== 00:08:31.822 Range in us Cumulative IO count 00:08:31.822 6049.477 - 6074.683: 0.0116% ( 2) 00:08:31.822 6074.683 - 6099.889: 0.0407% ( 5) 00:08:31.822 6099.889 - 6125.095: 0.0639% ( 4) 00:08:31.822 6125.095 - 6150.302: 0.0813% ( 3) 00:08:31.822 6150.302 - 6175.508: 0.1046% ( 4) 00:08:31.822 6175.508 - 6200.714: 0.1452% ( 7) 00:08:31.822 6200.714 - 6225.920: 0.2033% ( 10) 00:08:31.822 6225.920 - 6251.126: 0.2846% ( 14) 00:08:31.822 6251.126 - 6276.332: 0.3950% ( 19) 00:08:31.822 6276.332 - 6301.538: 0.5170% ( 21) 00:08:31.822 6301.538 - 6326.745: 0.7261% ( 36) 00:08:31.822 6326.745 - 6351.951: 0.9584% ( 40) 00:08:31.822 6351.951 - 6377.157: 1.1966% ( 41) 00:08:31.822 6377.157 - 6402.363: 1.6090% ( 71) 00:08:31.822 6402.363 - 6427.569: 2.0795% ( 81) 00:08:31.822 6427.569 - 6452.775: 2.5325% ( 78) 00:08:31.822 6452.775 - 6503.188: 3.9033% ( 236) 00:08:31.822 6503.188 - 6553.600: 5.5414% ( 282) 00:08:31.822 6553.600 - 6604.012: 7.1736% ( 281) 00:08:31.822 6604.012 - 6654.425: 9.5086% ( 402) 00:08:31.823 6654.425 - 6704.837: 13.1447% ( 626) 00:08:31.823 6704.837 - 6755.249: 17.1991% ( 698) 00:08:31.823 6755.249 - 6805.662: 22.6417% ( 937) 00:08:31.823 6805.662 - 6856.074: 28.1424% ( 947) 00:08:31.823 6856.074 - 6906.486: 33.4282% ( 910) 00:08:31.823 6906.486 - 6956.898: 38.9812% ( 956) 00:08:31.823 6956.898 - 7007.311: 44.0056% ( 865) 00:08:31.823 7007.311 - 7057.723: 48.5653% ( 785) 00:08:31.823 7057.723 - 7108.135: 51.9168% ( 577) 00:08:31.823 7108.135 - 7158.548: 55.3148% ( 585) 00:08:31.823 7158.548 - 7208.960: 58.7941% ( 599) 00:08:31.823 7208.960 - 7259.372: 61.9017% ( 535) 00:08:31.823 7259.372 - 7309.785: 64.5969% ( 464) 00:08:31.823 7309.785 - 7360.197: 67.2165% ( 451) 00:08:31.823 7360.197 - 7410.609: 69.3715% ( 371) 00:08:31.823 7410.609 - 7461.022: 71.2593% ( 325) 00:08:31.823 7461.022 - 7511.434: 72.8566% ( 275) 00:08:31.823 7511.434 - 7561.846: 74.5818% ( 297) 00:08:31.823 
7561.846 - 7612.258: 75.6912% ( 191) 00:08:31.823 7612.258 - 7662.671: 76.8007% ( 191) 00:08:31.823 7662.671 - 7713.083: 78.6071% ( 311) 00:08:31.823 7713.083 - 7763.495: 79.8385% ( 212) 00:08:31.823 7763.495 - 7813.908: 81.0757% ( 213) 00:08:31.823 7813.908 - 7864.320: 82.1329% ( 182) 00:08:31.823 7864.320 - 7914.732: 83.3527% ( 210) 00:08:31.823 7914.732 - 7965.145: 84.7932% ( 248) 00:08:31.823 7965.145 - 8015.557: 85.9375% ( 197) 00:08:31.823 8015.557 - 8065.969: 86.8204% ( 152) 00:08:31.823 8065.969 - 8116.382: 87.6684% ( 146) 00:08:31.823 8116.382 - 8166.794: 88.4758% ( 139) 00:08:31.823 8166.794 - 8217.206: 89.3239% ( 146) 00:08:31.823 8217.206 - 8267.618: 89.8060% ( 83) 00:08:31.823 8267.618 - 8318.031: 90.2474% ( 76) 00:08:31.823 8318.031 - 8368.443: 90.6831% ( 75) 00:08:31.823 8368.443 - 8418.855: 91.0897% ( 70) 00:08:31.823 8418.855 - 8469.268: 91.5195% ( 74) 00:08:31.823 8469.268 - 8519.680: 92.0597% ( 93) 00:08:31.823 8519.680 - 8570.092: 92.4024% ( 59) 00:08:31.823 8570.092 - 8620.505: 92.7567% ( 61) 00:08:31.823 8620.505 - 8670.917: 93.1633% ( 70) 00:08:31.823 8670.917 - 8721.329: 93.5932% ( 74) 00:08:31.823 8721.329 - 8771.742: 93.9126% ( 55) 00:08:31.823 8771.742 - 8822.154: 94.2147% ( 52) 00:08:31.823 8822.154 - 8872.566: 94.3889% ( 30) 00:08:31.823 8872.566 - 8922.978: 94.5864% ( 34) 00:08:31.823 8922.978 - 8973.391: 94.7549% ( 29) 00:08:31.823 8973.391 - 9023.803: 95.0453% ( 50) 00:08:31.823 9023.803 - 9074.215: 95.4751% ( 74) 00:08:31.823 9074.215 - 9124.628: 95.7539% ( 48) 00:08:31.823 9124.628 - 9175.040: 95.9282% ( 30) 00:08:31.823 9175.040 - 9225.452: 96.0792% ( 26) 00:08:31.823 9225.452 - 9275.865: 96.4452% ( 63) 00:08:31.823 9275.865 - 9326.277: 96.6020% ( 27) 00:08:31.823 9326.277 - 9376.689: 96.7704% ( 29) 00:08:31.823 9376.689 - 9427.102: 96.9331% ( 28) 00:08:31.823 9427.102 - 9477.514: 97.0841% ( 26) 00:08:31.823 9477.514 - 9527.926: 97.2874% ( 35) 00:08:31.823 9527.926 - 9578.338: 97.3862% ( 17) 00:08:31.823 9578.338 - 9628.751: 97.4442% ( 10) 00:08:31.823 9628.751 - 9679.163: 97.4907% ( 8) 00:08:31.823 9679.163 - 9729.575: 97.5197% ( 5) 00:08:31.823 9729.575 - 9779.988: 97.5430% ( 4) 00:08:31.823 9779.988 - 9830.400: 97.5488% ( 1) 00:08:31.823 9830.400 - 9880.812: 97.5604% ( 2) 00:08:31.823 9880.812 - 9931.225: 97.5778% ( 3) 00:08:31.823 9931.225 - 9981.637: 97.5953% ( 3) 00:08:31.823 9981.637 - 10032.049: 97.6069% ( 2) 00:08:31.823 10032.049 - 10082.462: 97.6824% ( 13) 00:08:31.823 10082.462 - 10132.874: 97.7928% ( 19) 00:08:31.823 10132.874 - 10183.286: 97.9438% ( 26) 00:08:31.823 10183.286 - 10233.698: 98.0193% ( 13) 00:08:31.823 10233.698 - 10284.111: 98.0599% ( 7) 00:08:31.823 10284.111 - 10334.523: 98.1006% ( 7) 00:08:31.823 10334.523 - 10384.935: 98.1471% ( 8) 00:08:31.823 10384.935 - 10435.348: 98.1819% ( 6) 00:08:31.823 10435.348 - 10485.760: 98.2284% ( 8) 00:08:31.823 10485.760 - 10536.172: 98.2749% ( 8) 00:08:31.823 10536.172 - 10586.585: 98.3097% ( 6) 00:08:31.823 10586.585 - 10636.997: 98.3388% ( 5) 00:08:31.823 10636.997 - 10687.409: 98.3794% ( 7) 00:08:31.823 10687.409 - 10737.822: 98.4375% ( 10) 00:08:31.823 10737.822 - 10788.234: 98.4956% ( 10) 00:08:31.823 10788.234 - 10838.646: 98.5653% ( 12) 00:08:31.823 10838.646 - 10889.058: 98.6408% ( 13) 00:08:31.823 10889.058 - 10939.471: 98.7454% ( 18) 00:08:31.823 10939.471 - 10989.883: 98.8325% ( 15) 00:08:31.823 10989.883 - 11040.295: 98.8731% ( 7) 00:08:31.823 11040.295 - 11090.708: 98.9022% ( 5) 00:08:31.823 11090.708 - 11141.120: 98.9312% ( 5) 00:08:31.823 11141.120 - 11191.532: 98.9545% ( 4) 
00:08:31.823 11191.532 - 11241.945: 98.9777% ( 4) 00:08:31.823 11241.945 - 11292.357: 99.0358% ( 10) 00:08:31.823 11292.357 - 11342.769: 99.1055% ( 12) 00:08:31.823 11342.769 - 11393.182: 99.1694% ( 11) 00:08:31.823 11393.182 - 11443.594: 99.1926% ( 4) 00:08:31.823 11443.594 - 11494.006: 99.2158% ( 4) 00:08:31.823 11494.006 - 11544.418: 99.2333% ( 3) 00:08:31.823 11544.418 - 11594.831: 99.2449% ( 2) 00:08:31.823 11594.831 - 11645.243: 99.2507% ( 1) 00:08:31.823 11645.243 - 11695.655: 99.2565% ( 1) 00:08:31.823 19156.677 - 19257.502: 99.2623% ( 1) 00:08:31.823 19257.502 - 19358.326: 99.2914% ( 5) 00:08:31.823 19358.326 - 19459.151: 99.3204% ( 5) 00:08:31.823 19459.151 - 19559.975: 99.3494% ( 5) 00:08:31.823 19559.975 - 19660.800: 99.3727% ( 4) 00:08:31.823 19660.800 - 19761.625: 99.4017% ( 5) 00:08:31.823 19761.625 - 19862.449: 99.4250% ( 4) 00:08:31.823 19862.449 - 19963.274: 99.4424% ( 3) 00:08:31.823 19963.274 - 20064.098: 99.4598% ( 3) 00:08:31.823 20064.098 - 20164.923: 99.4772% ( 3) 00:08:31.823 20164.923 - 20265.748: 99.4947% ( 3) 00:08:31.823 20265.748 - 20366.572: 99.5121% ( 3) 00:08:31.823 20366.572 - 20467.397: 99.5295% ( 3) 00:08:31.823 20467.397 - 20568.222: 99.5469% ( 3) 00:08:31.823 20568.222 - 20669.046: 99.5644% ( 3) 00:08:31.823 20669.046 - 20769.871: 99.5818% ( 3) 00:08:31.823 20769.871 - 20870.695: 99.5992% ( 3) 00:08:31.823 20870.695 - 20971.520: 99.6224% ( 4) 00:08:31.823 20971.520 - 21072.345: 99.6283% ( 1) 00:08:31.823 24298.732 - 24399.557: 99.6747% ( 8) 00:08:31.823 24399.557 - 24500.382: 99.7560% ( 14) 00:08:31.823 24500.382 - 24601.206: 99.7851% ( 5) 00:08:31.823 24601.206 - 24702.031: 99.7909% ( 1) 00:08:31.823 25508.628 - 25609.452: 99.8083% ( 3) 00:08:31.823 25609.452 - 25710.277: 99.8316% ( 4) 00:08:31.823 25710.277 - 25811.102: 99.8548% ( 4) 00:08:31.823 25811.102 - 26012.751: 99.9013% ( 8) 00:08:31.823 26012.751 - 26214.400: 99.9419% ( 7) 00:08:31.823 26214.400 - 26416.049: 99.9942% ( 9) 00:08:31.823 26416.049 - 26617.698: 100.0000% ( 1) 00:08:31.823 00:08:31.823 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:31.823 ============================================================================== 00:08:31.823 Range in us Cumulative IO count 00:08:31.823 6074.683 - 6099.889: 0.0116% ( 2) 00:08:31.823 6099.889 - 6125.095: 0.0290% ( 3) 00:08:31.823 6125.095 - 6150.302: 0.0523% ( 4) 00:08:31.823 6150.302 - 6175.508: 0.0697% ( 3) 00:08:31.823 6175.508 - 6200.714: 0.1104% ( 7) 00:08:31.823 6200.714 - 6225.920: 0.1568% ( 8) 00:08:31.823 6225.920 - 6251.126: 0.2614% ( 18) 00:08:31.823 6251.126 - 6276.332: 0.4066% ( 25) 00:08:31.823 6276.332 - 6301.538: 0.5460% ( 24) 00:08:31.823 6301.538 - 6326.745: 0.6970% ( 26) 00:08:31.823 6326.745 - 6351.951: 0.9410% ( 42) 00:08:31.823 6351.951 - 6377.157: 1.2779% ( 58) 00:08:31.823 6377.157 - 6402.363: 1.9459% ( 115) 00:08:31.823 6402.363 - 6427.569: 2.4744% ( 91) 00:08:31.823 6427.569 - 6452.775: 2.8520% ( 65) 00:08:31.823 6452.775 - 6503.188: 3.8395% ( 170) 00:08:31.823 6503.188 - 6553.600: 5.3264% ( 256) 00:08:31.823 6553.600 - 6604.012: 7.4640% ( 368) 00:08:31.823 6604.012 - 6654.425: 10.0314% ( 442) 00:08:31.823 6654.425 - 6704.837: 13.0576% ( 521) 00:08:31.823 6704.837 - 6755.249: 17.3385% ( 737) 00:08:31.823 6755.249 - 6805.662: 22.2177% ( 840) 00:08:31.823 6805.662 - 6856.074: 27.8346% ( 967) 00:08:31.823 6856.074 - 6906.486: 33.6838% ( 1007) 00:08:31.823 6906.486 - 6956.898: 38.4875% ( 827) 00:08:31.823 6956.898 - 7007.311: 43.6687% ( 892) 00:08:31.823 7007.311 - 7057.723: 48.5943% ( 848) 
00:08:31.824 7057.723 - 7108.135: 52.7939% ( 723) 00:08:31.824 7108.135 - 7158.548: 56.4068% ( 622) 00:08:31.824 7158.548 - 7208.960: 59.2066% ( 482) 00:08:31.824 7208.960 - 7259.372: 62.2967% ( 532) 00:08:31.824 7259.372 - 7309.785: 65.3055% ( 518) 00:08:31.824 7309.785 - 7360.197: 67.5418% ( 385) 00:08:31.824 7360.197 - 7410.609: 69.9349% ( 412) 00:08:31.824 7410.609 - 7461.022: 71.9447% ( 346) 00:08:31.824 7461.022 - 7511.434: 73.8731% ( 332) 00:08:31.824 7511.434 - 7561.846: 75.0987% ( 211) 00:08:31.824 7561.846 - 7612.258: 76.2837% ( 204) 00:08:31.824 7612.258 - 7662.671: 77.4164% ( 195) 00:08:31.824 7662.671 - 7713.083: 78.4329% ( 175) 00:08:31.824 7713.083 - 7763.495: 79.9721% ( 265) 00:08:31.824 7763.495 - 7813.908: 81.1513% ( 203) 00:08:31.824 7813.908 - 7864.320: 82.0051% ( 147) 00:08:31.824 7864.320 - 7914.732: 82.9054% ( 155) 00:08:31.824 7914.732 - 7965.145: 84.2472% ( 231) 00:08:31.824 7965.145 - 8015.557: 85.2637% ( 175) 00:08:31.824 8015.557 - 8065.969: 86.1815% ( 158) 00:08:31.824 8065.969 - 8116.382: 87.0295% ( 146) 00:08:31.824 8116.382 - 8166.794: 87.9414% ( 157) 00:08:31.824 8166.794 - 8217.206: 88.8476% ( 156) 00:08:31.824 8217.206 - 8267.618: 89.6608% ( 140) 00:08:31.824 8267.618 - 8318.031: 90.0848% ( 73) 00:08:31.824 8318.031 - 8368.443: 90.4740% ( 67) 00:08:31.824 8368.443 - 8418.855: 91.0316% ( 96) 00:08:31.824 8418.855 - 8469.268: 91.7054% ( 116) 00:08:31.824 8469.268 - 8519.680: 92.1120% ( 70) 00:08:31.824 8519.680 - 8570.092: 92.5070% ( 68) 00:08:31.824 8570.092 - 8620.505: 93.0646% ( 96) 00:08:31.824 8620.505 - 8670.917: 93.3841% ( 55) 00:08:31.824 8670.917 - 8721.329: 93.6454% ( 45) 00:08:31.824 8721.329 - 8771.742: 93.8197% ( 30) 00:08:31.824 8771.742 - 8822.154: 94.0056% ( 32) 00:08:31.824 8822.154 - 8872.566: 94.2205% ( 37) 00:08:31.824 8872.566 - 8922.978: 94.5109% ( 50) 00:08:31.824 8922.978 - 8973.391: 94.7607% ( 43) 00:08:31.824 8973.391 - 9023.803: 95.0279% ( 46) 00:08:31.824 9023.803 - 9074.215: 95.2370% ( 36) 00:08:31.824 9074.215 - 9124.628: 95.3880% ( 26) 00:08:31.824 9124.628 - 9175.040: 95.5913% ( 35) 00:08:31.824 9175.040 - 9225.452: 95.8295% ( 41) 00:08:31.824 9225.452 - 9275.865: 95.9631% ( 23) 00:08:31.824 9275.865 - 9326.277: 96.1141% ( 26) 00:08:31.824 9326.277 - 9376.689: 96.2303% ( 20) 00:08:31.824 9376.689 - 9427.102: 96.3348% ( 18) 00:08:31.824 9427.102 - 9477.514: 96.4510% ( 20) 00:08:31.824 9477.514 - 9527.926: 96.5323% ( 14) 00:08:31.824 9527.926 - 9578.338: 96.5904% ( 10) 00:08:31.824 9578.338 - 9628.751: 96.6949% ( 18) 00:08:31.824 9628.751 - 9679.163: 96.8285% ( 23) 00:08:31.824 9679.163 - 9729.575: 96.9912% ( 28) 00:08:31.824 9729.575 - 9779.988: 97.1306% ( 24) 00:08:31.824 9779.988 - 9830.400: 97.2990% ( 29) 00:08:31.824 9830.400 - 9880.812: 97.5720% ( 47) 00:08:31.824 9880.812 - 9931.225: 97.6766% ( 18) 00:08:31.824 9931.225 - 9981.637: 97.7579% ( 14) 00:08:31.824 9981.637 - 10032.049: 97.8450% ( 15) 00:08:31.824 10032.049 - 10082.462: 97.9322% ( 15) 00:08:31.824 10082.462 - 10132.874: 98.1006% ( 29) 00:08:31.824 10132.874 - 10183.286: 98.1587% ( 10) 00:08:31.824 10183.286 - 10233.698: 98.1993% ( 7) 00:08:31.824 10233.698 - 10284.111: 98.2342% ( 6) 00:08:31.824 10284.111 - 10334.523: 98.2807% ( 8) 00:08:31.824 10334.523 - 10384.935: 98.3271% ( 8) 00:08:31.824 10384.935 - 10435.348: 98.3910% ( 11) 00:08:31.824 10435.348 - 10485.760: 98.4433% ( 9) 00:08:31.824 10485.760 - 10536.172: 98.5362% ( 16) 00:08:31.824 10536.172 - 10586.585: 98.5827% ( 8) 00:08:31.824 10586.585 - 10636.997: 98.6524% ( 12) 00:08:31.824 10636.997 - 
10687.409: 98.7163% ( 11) 00:08:31.824 10687.409 - 10737.822: 98.7512% ( 6) 00:08:31.824 10737.822 - 10788.234: 98.8034% ( 9) 00:08:31.824 10788.234 - 10838.646: 98.8209% ( 3) 00:08:31.824 10838.646 - 10889.058: 98.8499% ( 5) 00:08:31.824 10889.058 - 10939.471: 98.8731% ( 4) 00:08:31.824 10939.471 - 10989.883: 98.9022% ( 5) 00:08:31.824 10989.883 - 11040.295: 98.9254% ( 4) 00:08:31.824 11040.295 - 11090.708: 98.9545% ( 5) 00:08:31.824 11090.708 - 11141.120: 99.1403% ( 32) 00:08:31.824 11141.120 - 11191.532: 99.1578% ( 3) 00:08:31.824 11191.532 - 11241.945: 99.1752% ( 3) 00:08:31.824 11241.945 - 11292.357: 99.1984% ( 4) 00:08:31.824 11292.357 - 11342.769: 99.2042% ( 1) 00:08:31.824 11342.769 - 11393.182: 99.2100% ( 1) 00:08:31.824 11393.182 - 11443.594: 99.2217% ( 2) 00:08:31.824 11443.594 - 11494.006: 99.2275% ( 1) 00:08:31.824 11494.006 - 11544.418: 99.2333% ( 1) 00:08:31.824 11544.418 - 11594.831: 99.2391% ( 1) 00:08:31.824 11594.831 - 11645.243: 99.2449% ( 1) 00:08:31.824 11645.243 - 11695.655: 99.2507% ( 1) 00:08:31.824 11695.655 - 11746.068: 99.2565% ( 1) 00:08:31.824 17946.782 - 18047.606: 99.2623% ( 1) 00:08:31.824 18047.606 - 18148.431: 99.2797% ( 3) 00:08:31.824 18148.431 - 18249.255: 99.3146% ( 6) 00:08:31.824 18249.255 - 18350.080: 99.3378% ( 4) 00:08:31.824 18350.080 - 18450.905: 99.3669% ( 5) 00:08:31.824 18450.905 - 18551.729: 99.3901% ( 4) 00:08:31.824 18551.729 - 18652.554: 99.4133% ( 4) 00:08:31.824 18652.554 - 18753.378: 99.4366% ( 4) 00:08:31.824 18753.378 - 18854.203: 99.4540% ( 3) 00:08:31.824 18854.203 - 18955.028: 99.4714% ( 3) 00:08:31.824 18955.028 - 19055.852: 99.4888% ( 3) 00:08:31.824 19055.852 - 19156.677: 99.5063% ( 3) 00:08:31.824 19156.677 - 19257.502: 99.5179% ( 2) 00:08:31.824 19257.502 - 19358.326: 99.5353% ( 3) 00:08:31.824 19358.326 - 19459.151: 99.5586% ( 4) 00:08:31.824 19459.151 - 19559.975: 99.5760% ( 3) 00:08:31.824 19559.975 - 19660.800: 99.5934% ( 3) 00:08:31.824 19660.800 - 19761.625: 99.6108% ( 3) 00:08:31.824 19761.625 - 19862.449: 99.6283% ( 3) 00:08:31.824 22483.889 - 22584.714: 99.6457% ( 3) 00:08:31.824 22584.714 - 22685.538: 99.6689% ( 4) 00:08:31.824 22685.538 - 22786.363: 99.6863% ( 3) 00:08:31.824 22786.363 - 22887.188: 99.7154% ( 5) 00:08:31.824 22887.188 - 22988.012: 99.7386% ( 4) 00:08:31.824 22988.012 - 23088.837: 99.8316% ( 16) 00:08:31.824 23088.837 - 23189.662: 99.8780% ( 8) 00:08:31.824 24097.083 - 24197.908: 99.8896% ( 2) 00:08:31.824 24197.908 - 24298.732: 99.9071% ( 3) 00:08:31.824 24298.732 - 24399.557: 99.9245% ( 3) 00:08:31.824 24399.557 - 24500.382: 99.9477% ( 4) 00:08:31.824 24500.382 - 24601.206: 99.9710% ( 4) 00:08:31.824 24601.206 - 24702.031: 99.9942% ( 4) 00:08:31.824 24702.031 - 24802.855: 100.0000% ( 1) 00:08:31.824 00:08:31.824 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:31.824 ============================================================================== 00:08:31.824 Range in us Cumulative IO count 00:08:31.824 6074.683 - 6099.889: 0.0058% ( 1) 00:08:31.824 6099.889 - 6125.095: 0.0232% ( 3) 00:08:31.824 6125.095 - 6150.302: 0.0349% ( 2) 00:08:31.824 6150.302 - 6175.508: 0.0581% ( 4) 00:08:31.824 6175.508 - 6200.714: 0.0871% ( 5) 00:08:31.824 6200.714 - 6225.920: 0.1452% ( 10) 00:08:31.824 6225.920 - 6251.126: 0.2498% ( 18) 00:08:31.824 6251.126 - 6276.332: 0.3834% ( 23) 00:08:31.824 6276.332 - 6301.538: 0.5518% ( 29) 00:08:31.824 6301.538 - 6326.745: 0.6970% ( 25) 00:08:31.824 6326.745 - 6351.951: 0.8713% ( 30) 00:08:31.824 6351.951 - 6377.157: 1.3302% ( 79) 00:08:31.824 6377.157 - 
6402.363: 1.7251% ( 68) 00:08:31.824 6402.363 - 6427.569: 2.2131% ( 84) 00:08:31.824 6427.569 - 6452.775: 2.7823% ( 98) 00:08:31.824 6452.775 - 6503.188: 3.9789% ( 206) 00:08:31.824 6503.188 - 6553.600: 5.2625% ( 221) 00:08:31.824 6553.600 - 6604.012: 7.3652% ( 362) 00:08:31.824 6604.012 - 6654.425: 10.0139% ( 456) 00:08:31.824 6654.425 - 6704.837: 13.6733% ( 630) 00:08:31.824 6704.837 - 6755.249: 17.7161% ( 696) 00:08:31.824 6755.249 - 6805.662: 21.6078% ( 670) 00:08:31.824 6805.662 - 6856.074: 26.8936% ( 910) 00:08:31.824 6856.074 - 6906.486: 32.5221% ( 969) 00:08:31.824 6906.486 - 6956.898: 37.4013% ( 840) 00:08:31.824 6956.898 - 7007.311: 41.9552% ( 784) 00:08:31.824 7007.311 - 7057.723: 46.5439% ( 790) 00:08:31.824 7057.723 - 7108.135: 52.0678% ( 951) 00:08:31.824 7108.135 - 7158.548: 56.3371% ( 735) 00:08:31.824 7158.548 - 7208.960: 59.7467% ( 587) 00:08:31.824 7208.960 - 7259.372: 63.2783% ( 608) 00:08:31.824 7259.372 - 7309.785: 66.1478% ( 494) 00:08:31.825 7309.785 - 7360.197: 68.5641% ( 416) 00:08:31.825 7360.197 - 7410.609: 70.3299% ( 304) 00:08:31.825 7410.609 - 7461.022: 72.2990% ( 339) 00:08:31.825 7461.022 - 7511.434: 73.6234% ( 228) 00:08:31.825 7511.434 - 7561.846: 74.7096% ( 187) 00:08:31.825 7561.846 - 7612.258: 76.1733% ( 252) 00:08:31.825 7612.258 - 7662.671: 77.6545% ( 255) 00:08:31.825 7662.671 - 7713.083: 78.9963% ( 231) 00:08:31.825 7713.083 - 7763.495: 80.2393% ( 214) 00:08:31.825 7763.495 - 7813.908: 81.3197% ( 186) 00:08:31.825 7813.908 - 7864.320: 82.0516% ( 126) 00:08:31.825 7864.320 - 7914.732: 83.0158% ( 166) 00:08:31.825 7914.732 - 7965.145: 84.3924% ( 237) 00:08:31.825 7965.145 - 8015.557: 85.4380% ( 180) 00:08:31.825 8015.557 - 8065.969: 86.2512% ( 140) 00:08:31.825 8065.969 - 8116.382: 87.1980% ( 163) 00:08:31.825 8116.382 - 8166.794: 87.9473% ( 129) 00:08:31.825 8166.794 - 8217.206: 88.9638% ( 175) 00:08:31.825 8217.206 - 8267.618: 89.4691% ( 87) 00:08:31.825 8267.618 - 8318.031: 89.8699% ( 69) 00:08:31.825 8318.031 - 8368.443: 90.5204% ( 112) 00:08:31.825 8368.443 - 8418.855: 91.0723% ( 95) 00:08:31.825 8418.855 - 8469.268: 91.5137% ( 76) 00:08:31.825 8469.268 - 8519.680: 91.8622% ( 60) 00:08:31.825 8519.680 - 8570.092: 92.1933% ( 57) 00:08:31.825 8570.092 - 8620.505: 92.6464% ( 78) 00:08:31.825 8620.505 - 8670.917: 93.0530% ( 70) 00:08:31.825 8670.917 - 8721.329: 93.4421% ( 67) 00:08:31.825 8721.329 - 8771.742: 93.8662% ( 73) 00:08:31.825 8771.742 - 8822.154: 94.2147% ( 60) 00:08:31.825 8822.154 - 8872.566: 94.5283% ( 54) 00:08:31.825 8872.566 - 8922.978: 94.8246% ( 51) 00:08:31.825 8922.978 - 8973.391: 94.9930% ( 29) 00:08:31.825 8973.391 - 9023.803: 95.1789% ( 32) 00:08:31.825 9023.803 - 9074.215: 95.4054% ( 39) 00:08:31.825 9074.215 - 9124.628: 95.6436% ( 41) 00:08:31.825 9124.628 - 9175.040: 95.7598% ( 20) 00:08:31.825 9175.040 - 9225.452: 96.0270% ( 46) 00:08:31.825 9225.452 - 9275.865: 96.1896% ( 28) 00:08:31.825 9275.865 - 9326.277: 96.2767% ( 15) 00:08:31.825 9326.277 - 9376.689: 96.3464% ( 12) 00:08:31.825 9376.689 - 9427.102: 96.4452% ( 17) 00:08:31.825 9427.102 - 9477.514: 96.5381% ( 16) 00:08:31.825 9477.514 - 9527.926: 96.6485% ( 19) 00:08:31.825 9527.926 - 9578.338: 96.7240% ( 13) 00:08:31.825 9578.338 - 9628.751: 96.8169% ( 16) 00:08:31.825 9628.751 - 9679.163: 97.0318% ( 37) 00:08:31.825 9679.163 - 9729.575: 97.1015% ( 12) 00:08:31.825 9729.575 - 9779.988: 97.1829% ( 14) 00:08:31.825 9779.988 - 9830.400: 97.2700% ( 15) 00:08:31.825 9830.400 - 9880.812: 97.3687% ( 17) 00:08:31.825 9880.812 - 9931.225: 97.4559% ( 15) 00:08:31.825 
9931.225 - 9981.637: 97.5372% ( 14) 00:08:31.825 9981.637 - 10032.049: 97.7753% ( 41) 00:08:31.825 10032.049 - 10082.462: 98.0251% ( 43) 00:08:31.825 10082.462 - 10132.874: 98.1587% ( 23) 00:08:31.825 10132.874 - 10183.286: 98.2458% ( 15) 00:08:31.825 10183.286 - 10233.698: 98.3155% ( 12) 00:08:31.825 10233.698 - 10284.111: 98.3562% ( 7) 00:08:31.825 10284.111 - 10334.523: 98.3968% ( 7) 00:08:31.825 10334.523 - 10384.935: 98.5072% ( 19) 00:08:31.825 10384.935 - 10435.348: 98.6350% ( 22) 00:08:31.825 10435.348 - 10485.760: 98.6582% ( 4) 00:08:31.825 10485.760 - 10536.172: 98.6815% ( 4) 00:08:31.825 10536.172 - 10586.585: 98.7105% ( 5) 00:08:31.825 10586.585 - 10636.997: 98.7337% ( 4) 00:08:31.825 10636.997 - 10687.409: 98.7628% ( 5) 00:08:31.825 10687.409 - 10737.822: 98.7802% ( 3) 00:08:31.825 10737.822 - 10788.234: 98.8151% ( 6) 00:08:31.825 10788.234 - 10838.646: 98.8557% ( 7) 00:08:31.825 10838.646 - 10889.058: 98.8906% ( 6) 00:08:31.825 10889.058 - 10939.471: 98.9254% ( 6) 00:08:31.825 10939.471 - 10989.883: 98.9487% ( 4) 00:08:31.825 10989.883 - 11040.295: 98.9719% ( 4) 00:08:31.825 11040.295 - 11090.708: 99.0300% ( 10) 00:08:31.825 11090.708 - 11141.120: 99.0881% ( 10) 00:08:31.825 11141.120 - 11191.532: 99.1520% ( 11) 00:08:31.825 11191.532 - 11241.945: 99.1926% ( 7) 00:08:31.825 11241.945 - 11292.357: 99.1984% ( 1) 00:08:31.825 11292.357 - 11342.769: 99.2100% ( 2) 00:08:31.825 11342.769 - 11393.182: 99.2158% ( 1) 00:08:31.825 11393.182 - 11443.594: 99.2275% ( 2) 00:08:31.825 11443.594 - 11494.006: 99.2333% ( 1) 00:08:31.825 11494.006 - 11544.418: 99.2449% ( 2) 00:08:31.825 11544.418 - 11594.831: 99.2507% ( 1) 00:08:31.825 11594.831 - 11645.243: 99.2565% ( 1) 00:08:31.825 15829.465 - 15930.289: 99.2623% ( 1) 00:08:31.825 15930.289 - 16031.114: 99.2855% ( 4) 00:08:31.825 16031.114 - 16131.938: 99.3088% ( 4) 00:08:31.825 16131.938 - 16232.763: 99.3378% ( 5) 00:08:31.825 16232.763 - 16333.588: 99.3611% ( 4) 00:08:31.825 16333.588 - 16434.412: 99.3901% ( 5) 00:08:31.825 16434.412 - 16535.237: 99.4075% ( 3) 00:08:31.825 16535.237 - 16636.062: 99.4250% ( 3) 00:08:31.825 16636.062 - 16736.886: 99.4308% ( 1) 00:08:31.825 16736.886 - 16837.711: 99.4482% ( 3) 00:08:31.825 16837.711 - 16938.535: 99.4656% ( 3) 00:08:31.825 16938.535 - 17039.360: 99.4830% ( 3) 00:08:31.825 17039.360 - 17140.185: 99.5005% ( 3) 00:08:31.825 17140.185 - 17241.009: 99.5179% ( 3) 00:08:31.825 17241.009 - 17341.834: 99.5353% ( 3) 00:08:31.825 17341.834 - 17442.658: 99.5527% ( 3) 00:08:31.825 17442.658 - 17543.483: 99.5702% ( 3) 00:08:31.825 17543.483 - 17644.308: 99.5876% ( 3) 00:08:31.825 17644.308 - 17745.132: 99.6050% ( 3) 00:08:31.825 17745.132 - 17845.957: 99.6283% ( 4) 00:08:31.825 20568.222 - 20669.046: 99.6399% ( 2) 00:08:31.825 20971.520 - 21072.345: 99.6457% ( 1) 00:08:31.825 21173.169 - 21273.994: 99.6515% ( 1) 00:08:31.825 21374.818 - 21475.643: 99.6573% ( 1) 00:08:31.825 21475.643 - 21576.468: 99.6863% ( 5) 00:08:31.825 21878.942 - 21979.766: 99.7038% ( 3) 00:08:31.825 21979.766 - 22080.591: 99.7270% ( 4) 00:08:31.825 22080.591 - 22181.415: 99.7502% ( 4) 00:08:31.825 22181.415 - 22282.240: 99.7735% ( 4) 00:08:31.825 22282.240 - 22383.065: 99.7967% ( 4) 00:08:31.825 22383.065 - 22483.889: 99.8199% ( 4) 00:08:31.825 22483.889 - 22584.714: 99.8432% ( 4) 00:08:31.825 22584.714 - 22685.538: 99.8664% ( 4) 00:08:31.825 22685.538 - 22786.363: 99.8896% ( 4) 00:08:31.825 22786.363 - 22887.188: 99.9129% ( 4) 00:08:31.825 22887.188 - 22988.012: 99.9361% ( 4) 00:08:31.825 22988.012 - 23088.837: 99.9593% ( 4) 
00:08:31.825 23088.837 - 23189.662: 99.9826% ( 4) 00:08:31.825 23189.662 - 23290.486: 100.0000% ( 3) 00:08:31.825 00:08:31.825 15:56:29 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:31.825 00:08:31.825 real 0m2.503s 00:08:31.825 user 0m2.205s 00:08:31.825 sys 0m0.195s 00:08:31.825 15:56:29 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.825 15:56:29 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 ************************************ 00:08:31.825 END TEST nvme_perf 00:08:31.825 ************************************ 00:08:31.825 15:56:29 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:31.825 15:56:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:31.825 15:56:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.825 15:56:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 ************************************ 00:08:31.825 START TEST nvme_hello_world 00:08:31.825 ************************************ 00:08:31.825 15:56:29 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:31.825 Initializing NVMe Controllers 00:08:31.825 Attached to 0000:00:10.0 00:08:31.825 Namespace ID: 1 size: 6GB 00:08:31.825 Attached to 0000:00:11.0 00:08:31.825 Namespace ID: 1 size: 5GB 00:08:31.825 Attached to 0000:00:13.0 00:08:31.825 Namespace ID: 1 size: 1GB 00:08:31.825 Attached to 0000:00:12.0 00:08:31.825 Namespace ID: 1 size: 4GB 00:08:31.825 Namespace ID: 2 size: 4GB 00:08:31.825 Namespace ID: 3 size: 4GB 00:08:31.825 Initialization complete. 00:08:31.825 INFO: using host memory buffer for IO 00:08:31.825 Hello world! 00:08:31.825 INFO: using host memory buffer for IO 00:08:31.825 Hello world! 00:08:31.825 INFO: using host memory buffer for IO 00:08:31.825 Hello world! 00:08:31.825 INFO: using host memory buffer for IO 00:08:31.825 Hello world! 00:08:31.825 INFO: using host memory buffer for IO 00:08:31.825 Hello world! 00:08:31.825 INFO: using host memory buffer for IO 00:08:31.825 Hello world! 
00:08:31.825 00:08:31.825 real 0m0.231s 00:08:31.826 user 0m0.090s 00:08:31.826 sys 0m0.093s 00:08:31.826 15:56:29 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.826 15:56:29 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:31.826 ************************************ 00:08:31.826 END TEST nvme_hello_world 00:08:31.826 ************************************ 00:08:31.826 15:56:29 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:31.826 15:56:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.826 15:56:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.826 15:56:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.826 ************************************ 00:08:31.826 START TEST nvme_sgl 00:08:31.826 ************************************ 00:08:31.826 15:56:29 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:32.083 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:32.083 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:32.083 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:32.083 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:32.083 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:32.083 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:32.083 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:32.083 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:32.083 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:32.083 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:32.083 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:32.083 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:32.083 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:32.083 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:32.083 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:32.083 NVMe Readv/Writev Request test 00:08:32.083 Attached to 0000:00:10.0 00:08:32.083 Attached to 0000:00:11.0 00:08:32.083 Attached to 0000:00:13.0 00:08:32.083 Attached to 0000:00:12.0 00:08:32.083 0000:00:10.0: build_io_request_2 test passed 00:08:32.083 0000:00:10.0: build_io_request_4 test passed 00:08:32.083 0000:00:10.0: build_io_request_5 test passed 00:08:32.083 0000:00:10.0: build_io_request_6 test passed 00:08:32.083 0000:00:10.0: build_io_request_7 test passed 00:08:32.083 0000:00:10.0: build_io_request_10 test passed 00:08:32.083 0000:00:11.0: build_io_request_2 test passed 00:08:32.083 0000:00:11.0: build_io_request_4 test passed 00:08:32.083 0000:00:11.0: build_io_request_5 test passed 00:08:32.083 0000:00:11.0: build_io_request_6 test passed 00:08:32.083 0000:00:11.0: build_io_request_7 test passed 00:08:32.083 0000:00:11.0: build_io_request_10 test passed 00:08:32.083 Cleaning up... 00:08:32.083 00:08:32.083 real 0m0.286s 00:08:32.083 user 0m0.135s 00:08:32.083 sys 0m0.107s 00:08:32.083 15:56:30 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.083 15:56:30 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:32.083 ************************************ 00:08:32.083 END TEST nvme_sgl 00:08:32.083 ************************************ 00:08:32.083 15:56:30 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:32.083 15:56:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.083 15:56:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.083 15:56:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.083 ************************************ 00:08:32.083 START TEST nvme_e2edp 00:08:32.083 ************************************ 00:08:32.083 15:56:30 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:32.339 NVMe Write/Read with End-to-End data protection test 00:08:32.339 Attached to 0000:00:10.0 00:08:32.339 Attached to 0000:00:11.0 00:08:32.339 Attached to 0000:00:13.0 00:08:32.340 Attached to 0000:00:12.0 00:08:32.340 Cleaning up... 
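The "Invalid IO length parameter" lines in the sgl pass above come from requests the test builds on purpose with transfer sizes the driver has to refuse, while the requests with block-aligned sizes pass. A plausible shape for that check, sketched in plain C rather than the actual SPDK code, is that the total payload must be a non-zero whole multiple of the logical block size (the 512-byte sector size and sample lengths below are assumptions):

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative version of the length check implied by the log above:
 * a payload that is not a whole number of logical blocks is rejected. */
static int validate_io_length(size_t len, size_t sector_size)
{
    if (len == 0 || len % sector_size != 0)
        return -EINVAL;  /* maps to "Invalid IO length parameter" */
    return 0;
}

int main(void)
{
    const size_t sector = 512;
    const size_t lens[] = { 512, 520, 4096, 4097, 0 };
    for (size_t i = 0; i < sizeof(lens) / sizeof(lens[0]); i++) {
        int rc = validate_io_length(lens[i], sector);
        printf("len=%zu -> %s\n", lens[i],
               rc ? "Invalid IO length parameter" : "test passed");
    }
    return 0;
}
```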
00:08:32.340 00:08:32.340 real 0m0.207s 00:08:32.340 user 0m0.060s 00:08:32.340 sys 0m0.102s 00:08:32.340 15:56:30 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.340 15:56:30 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:32.340 ************************************ 00:08:32.340 END TEST nvme_e2edp 00:08:32.340 ************************************ 00:08:32.340 15:56:30 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:32.340 15:56:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.340 15:56:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.340 15:56:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.340 ************************************ 00:08:32.340 START TEST nvme_reserve 00:08:32.340 ************************************ 00:08:32.340 15:56:30 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:32.605 ===================================================== 00:08:32.605 NVMe Controller at PCI bus 0, device 16, function 0 00:08:32.605 ===================================================== 00:08:32.605 Reservations: Not Supported 00:08:32.605 ===================================================== 00:08:32.605 NVMe Controller at PCI bus 0, device 17, function 0 00:08:32.605 ===================================================== 00:08:32.605 Reservations: Not Supported 00:08:32.605 ===================================================== 00:08:32.605 NVMe Controller at PCI bus 0, device 19, function 0 00:08:32.605 ===================================================== 00:08:32.605 Reservations: Not Supported 00:08:32.605 ===================================================== 00:08:32.605 NVMe Controller at PCI bus 0, device 18, function 0 00:08:32.605 ===================================================== 00:08:32.605 Reservations: Not Supported 00:08:32.605 Reservation test passed 00:08:32.605 00:08:32.605 real 0m0.219s 00:08:32.605 user 0m0.076s 00:08:32.605 sys 0m0.096s 00:08:32.605 15:56:30 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.605 15:56:30 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:32.605 ************************************ 00:08:32.605 END TEST nvme_reserve 00:08:32.605 ************************************ 00:08:32.606 15:56:30 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:32.606 15:56:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.606 15:56:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.606 15:56:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.606 ************************************ 00:08:32.606 START TEST nvme_err_injection 00:08:32.606 ************************************ 00:08:32.606 15:56:30 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:32.864 NVMe Error Injection test 00:08:32.864 Attached to 0000:00:10.0 00:08:32.864 Attached to 0000:00:11.0 00:08:32.864 Attached to 0000:00:13.0 00:08:32.864 Attached to 0000:00:12.0 00:08:32.864 0000:00:13.0: get features failed as expected 00:08:32.864 0000:00:12.0: get features failed as expected 00:08:32.864 0000:00:10.0: get features failed as expected 00:08:32.864 0000:00:11.0: get features failed as expected 00:08:32.864 
0000:00:10.0: get features successfully as expected 00:08:32.864 0000:00:11.0: get features successfully as expected 00:08:32.864 0000:00:13.0: get features successfully as expected 00:08:32.864 0000:00:12.0: get features successfully as expected 00:08:32.864 0000:00:10.0: read failed as expected 00:08:32.864 0000:00:11.0: read failed as expected 00:08:32.864 0000:00:13.0: read failed as expected 00:08:32.864 0000:00:12.0: read failed as expected 00:08:32.864 0000:00:10.0: read successfully as expected 00:08:32.864 0000:00:11.0: read successfully as expected 00:08:32.864 0000:00:13.0: read successfully as expected 00:08:32.864 0000:00:12.0: read successfully as expected 00:08:32.864 Cleaning up... 00:08:32.864 00:08:32.864 real 0m0.275s 00:08:32.864 user 0m0.096s 00:08:32.864 sys 0m0.129s 00:08:32.864 15:56:31 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.864 15:56:31 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:32.864 ************************************ 00:08:32.864 END TEST nvme_err_injection 00:08:32.864 ************************************ 00:08:32.864 15:56:31 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:32.864 15:56:31 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:32.864 15:56:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.864 15:56:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.121 ************************************ 00:08:33.121 START TEST nvme_overhead 00:08:33.121 ************************************ 00:08:33.121 15:56:31 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:34.576 Initializing NVMe Controllers 00:08:34.576 Attached to 0000:00:10.0 00:08:34.576 Attached to 0000:00:11.0 00:08:34.576 Attached to 0000:00:13.0 00:08:34.576 Attached to 0000:00:12.0 00:08:34.576 Initialization complete. Launching workers. 
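The err_injection pass above follows a fixed rhythm: arm a fault so the next Get Features and read fail ("failed as expected"), then clear it and repeat ("successfully as expected"). The core of such an injector is just a countdown that fails the next N matching commands; a toy standalone version, with the struct names and status value invented for illustration:

```c
#include <stdio.h>

/* Toy fault injector: fail the next `remaining` matching commands with
 * `status`, then complete everything normally. This mimics the arm /
 * clear cycle visible in the err_injection output above. */
struct err_inject {
    int remaining;  /* commands still to be failed */
    int status;     /* nonzero status returned while armed */
};

static int submit_get_features(struct err_inject *inj)
{
    if (inj->remaining > 0) {
        inj->remaining--;
        return inj->status;  /* injected failure */
    }
    return 0;                /* normal completion */
}

int main(void)
{
    struct err_inject inj = { .remaining = 1, .status = -1 };
    /* First submission fails as expected, the retry succeeds. */
    for (int i = 0; i < 2; i++)
        printf("get features %s as expected\n",
               submit_get_features(&inj) ? "failed" : "succeeded");
    return 0;
}
```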
00:08:34.576 submit (in ns) avg, min, max = 11603.4, 10441.5, 280787.7 00:08:34.576 complete (in ns) avg, min, max = 7754.4, 7163.1, 450850.0 00:08:34.576 00:08:34.576 Submit histogram 00:08:34.576 ================ 00:08:34.576 Range in us Cumulative Count 00:08:34.576 10.437 - 10.486: 0.0121% ( 2) 00:08:34.576 10.585 - 10.634: 0.0182% ( 1) 00:08:34.576 10.683 - 10.732: 0.0243% ( 1) 00:08:34.576 10.782 - 10.831: 0.0304% ( 1) 00:08:34.576 10.831 - 10.880: 0.0547% ( 4) 00:08:34.576 10.880 - 10.929: 0.2429% ( 31) 00:08:34.576 10.929 - 10.978: 1.2692% ( 169) 00:08:34.576 10.978 - 11.028: 4.8582% ( 591) 00:08:34.576 11.028 - 11.077: 13.2082% ( 1375) 00:08:34.576 11.077 - 11.126: 27.8436% ( 2410) 00:08:34.576 11.126 - 11.175: 44.5254% ( 2747) 00:08:34.576 11.175 - 11.225: 57.8004% ( 2186) 00:08:34.576 11.225 - 11.274: 66.7335% ( 1471) 00:08:34.576 11.274 - 11.323: 72.4236% ( 937) 00:08:34.576 11.323 - 11.372: 75.7515% ( 548) 00:08:34.576 11.372 - 11.422: 77.8466% ( 345) 00:08:34.576 11.422 - 11.471: 79.1583% ( 216) 00:08:34.576 11.471 - 11.520: 80.0510% ( 147) 00:08:34.576 11.520 - 11.569: 81.0044% ( 157) 00:08:34.576 11.569 - 11.618: 81.9153% ( 150) 00:08:34.576 11.618 - 11.668: 83.0449% ( 186) 00:08:34.576 11.668 - 11.717: 84.3991% ( 223) 00:08:34.576 11.717 - 11.766: 85.5529% ( 190) 00:08:34.576 11.766 - 11.815: 86.5792% ( 169) 00:08:34.576 11.815 - 11.865: 87.3140% ( 121) 00:08:34.576 11.865 - 11.914: 88.2006% ( 146) 00:08:34.576 11.914 - 11.963: 88.8747% ( 111) 00:08:34.576 11.963 - 12.012: 89.5124% ( 105) 00:08:34.576 12.012 - 12.062: 90.1075% ( 98) 00:08:34.576 12.062 - 12.111: 90.6662% ( 92) 00:08:34.576 12.111 - 12.160: 91.2006% ( 88) 00:08:34.576 12.160 - 12.209: 91.7411% ( 89) 00:08:34.576 12.209 - 12.258: 92.1115% ( 61) 00:08:34.576 12.258 - 12.308: 92.5184% ( 67) 00:08:34.576 12.308 - 12.357: 92.7309% ( 35) 00:08:34.576 12.357 - 12.406: 92.9010% ( 28) 00:08:34.576 12.406 - 12.455: 93.0042% ( 17) 00:08:34.576 12.455 - 12.505: 93.1135% ( 18) 00:08:34.576 12.505 - 12.554: 93.2228% ( 18) 00:08:34.576 12.554 - 12.603: 93.2896% ( 11) 00:08:34.576 12.603 - 12.702: 93.3807% ( 15) 00:08:34.576 12.702 - 12.800: 93.4293% ( 8) 00:08:34.576 12.800 - 12.898: 93.5022% ( 12) 00:08:34.576 12.898 - 12.997: 93.5872% ( 14) 00:08:34.576 12.997 - 13.095: 93.6358% ( 8) 00:08:34.576 13.095 - 13.194: 93.7390% ( 17) 00:08:34.576 13.194 - 13.292: 93.8301% ( 15) 00:08:34.576 13.292 - 13.391: 93.9515% ( 20) 00:08:34.576 13.391 - 13.489: 94.0487% ( 16) 00:08:34.576 13.489 - 13.588: 94.1702% ( 20) 00:08:34.576 13.588 - 13.686: 94.2491% ( 13) 00:08:34.576 13.686 - 13.785: 94.3645% ( 19) 00:08:34.576 13.785 - 13.883: 94.4799% ( 19) 00:08:34.576 13.883 - 13.982: 94.5710% ( 15) 00:08:34.576 13.982 - 14.080: 94.6803% ( 18) 00:08:34.576 14.080 - 14.178: 94.8199% ( 23) 00:08:34.576 14.178 - 14.277: 94.9475% ( 21) 00:08:34.576 14.277 - 14.375: 95.0446% ( 16) 00:08:34.576 14.375 - 14.474: 95.1782% ( 22) 00:08:34.576 14.474 - 14.572: 95.3240% ( 24) 00:08:34.576 14.572 - 14.671: 95.4758% ( 25) 00:08:34.576 14.671 - 14.769: 95.6580% ( 30) 00:08:34.576 14.769 - 14.868: 95.8219% ( 27) 00:08:34.576 14.868 - 14.966: 96.0223% ( 33) 00:08:34.576 14.966 - 15.065: 96.2592% ( 39) 00:08:34.576 15.065 - 15.163: 96.5507% ( 48) 00:08:34.576 15.163 - 15.262: 96.7572% ( 34) 00:08:34.576 15.262 - 15.360: 96.9515% ( 32) 00:08:34.576 15.360 - 15.458: 97.1397% ( 31) 00:08:34.576 15.458 - 15.557: 97.2551% ( 19) 00:08:34.576 15.557 - 15.655: 97.3887% ( 22) 00:08:34.576 15.655 - 15.754: 97.5466% ( 26) 00:08:34.576 15.754 - 15.852: 97.7106% ( 27) 
00:08:34.576 15.852 - 15.951: 97.8381% ( 21) 00:08:34.576 15.951 - 16.049: 97.9049% ( 11) 00:08:34.576 16.049 - 16.148: 97.9717% ( 11) 00:08:34.576 16.148 - 16.246: 98.0446% ( 12) 00:08:34.576 16.246 - 16.345: 98.0810% ( 6) 00:08:34.576 16.345 - 16.443: 98.1296% ( 8) 00:08:34.576 16.443 - 16.542: 98.1842% ( 9) 00:08:34.576 16.542 - 16.640: 98.2268% ( 7) 00:08:34.576 16.640 - 16.738: 98.3300% ( 17) 00:08:34.576 16.738 - 16.837: 98.4211% ( 15) 00:08:34.576 16.837 - 16.935: 98.4575% ( 6) 00:08:34.576 16.935 - 17.034: 98.4940% ( 6) 00:08:34.576 17.034 - 17.132: 98.5304% ( 6) 00:08:34.577 17.132 - 17.231: 98.5850% ( 9) 00:08:34.577 17.231 - 17.329: 98.6154% ( 5) 00:08:34.577 17.329 - 17.428: 98.7308% ( 19) 00:08:34.577 17.428 - 17.526: 98.7915% ( 10) 00:08:34.577 17.526 - 17.625: 98.8462% ( 9) 00:08:34.577 17.625 - 17.723: 98.9130% ( 11) 00:08:34.577 17.723 - 17.822: 98.9859% ( 12) 00:08:34.577 17.822 - 17.920: 99.0527% ( 11) 00:08:34.577 17.920 - 18.018: 99.1316% ( 13) 00:08:34.577 18.018 - 18.117: 99.1620% ( 5) 00:08:34.577 18.117 - 18.215: 99.2105% ( 8) 00:08:34.577 18.215 - 18.314: 99.2591% ( 8) 00:08:34.577 18.314 - 18.412: 99.2834% ( 4) 00:08:34.577 18.412 - 18.511: 99.3563% ( 12) 00:08:34.577 18.511 - 18.609: 99.3867% ( 5) 00:08:34.577 18.609 - 18.708: 99.4292% ( 7) 00:08:34.577 18.708 - 18.806: 99.4474% ( 3) 00:08:34.577 18.806 - 18.905: 99.4960% ( 8) 00:08:34.577 18.905 - 19.003: 99.5385% ( 7) 00:08:34.577 19.003 - 19.102: 99.5506% ( 2) 00:08:34.577 19.102 - 19.200: 99.5749% ( 4) 00:08:34.577 19.200 - 19.298: 99.6053% ( 5) 00:08:34.577 19.298 - 19.397: 99.6174% ( 2) 00:08:34.577 19.397 - 19.495: 99.6417% ( 4) 00:08:34.577 19.495 - 19.594: 99.6478% ( 1) 00:08:34.577 19.594 - 19.692: 99.6660% ( 3) 00:08:34.577 19.692 - 19.791: 99.6781% ( 2) 00:08:34.577 19.791 - 19.889: 99.6903% ( 2) 00:08:34.577 19.889 - 19.988: 99.6964% ( 1) 00:08:34.577 20.185 - 20.283: 99.7085% ( 2) 00:08:34.577 20.382 - 20.480: 99.7146% ( 1) 00:08:34.577 20.480 - 20.578: 99.7207% ( 1) 00:08:34.577 20.578 - 20.677: 99.7328% ( 2) 00:08:34.577 20.677 - 20.775: 99.7389% ( 1) 00:08:34.577 20.775 - 20.874: 99.7510% ( 2) 00:08:34.577 20.874 - 20.972: 99.7632% ( 2) 00:08:34.577 20.972 - 21.071: 99.7692% ( 1) 00:08:34.577 21.169 - 21.268: 99.7753% ( 1) 00:08:34.577 21.366 - 21.465: 99.7875% ( 2) 00:08:34.577 21.465 - 21.563: 99.8057% ( 3) 00:08:34.577 21.563 - 21.662: 99.8178% ( 2) 00:08:34.577 21.662 - 21.760: 99.8239% ( 1) 00:08:34.577 21.858 - 21.957: 99.8300% ( 1) 00:08:34.577 21.957 - 22.055: 99.8360% ( 1) 00:08:34.577 22.154 - 22.252: 99.8421% ( 1) 00:08:34.577 22.252 - 22.351: 99.8482% ( 1) 00:08:34.577 22.449 - 22.548: 99.8543% ( 1) 00:08:34.577 23.040 - 23.138: 99.8603% ( 1) 00:08:34.577 23.138 - 23.237: 99.8664% ( 1) 00:08:34.577 23.237 - 23.335: 99.8725% ( 1) 00:08:34.577 23.532 - 23.631: 99.8785% ( 1) 00:08:34.577 23.631 - 23.729: 99.8846% ( 1) 00:08:34.577 23.926 - 24.025: 99.8907% ( 1) 00:08:34.577 24.025 - 24.123: 99.8968% ( 1) 00:08:34.577 24.320 - 24.418: 99.9028% ( 1) 00:08:34.577 24.911 - 25.009: 99.9089% ( 1) 00:08:34.577 25.403 - 25.600: 99.9150% ( 1) 00:08:34.577 26.782 - 26.978: 99.9271% ( 2) 00:08:34.577 27.372 - 27.569: 99.9332% ( 1) 00:08:34.577 28.357 - 28.554: 99.9453% ( 2) 00:08:34.577 30.326 - 30.523: 99.9514% ( 1) 00:08:34.577 33.280 - 33.477: 99.9575% ( 1) 00:08:34.577 34.068 - 34.265: 99.9636% ( 1) 00:08:34.577 35.052 - 35.249: 99.9696% ( 1) 00:08:34.577 51.988 - 52.382: 99.9757% ( 1) 00:08:34.577 53.563 - 53.957: 99.9818% ( 1) 00:08:34.577 87.434 - 87.828: 99.9879% ( 1) 00:08:34.577 89.403 - 
89.797: 99.9939% ( 1) 00:08:34.577 280.418 - 281.994: 100.0000% ( 1) 00:08:34.577 00:08:34.577 Complete histogram 00:08:34.577 ================== 00:08:34.577 Range in us Cumulative Count 00:08:34.577 7.138 - 7.188: 0.0121% ( 2) 00:08:34.577 7.188 - 7.237: 0.1032% ( 15) 00:08:34.577 7.237 - 7.286: 1.4453% ( 221) 00:08:34.577 7.286 - 7.335: 9.8804% ( 1389) 00:08:34.577 7.335 - 7.385: 29.7626% ( 3274) 00:08:34.577 7.385 - 7.434: 52.1224% ( 3682) 00:08:34.577 7.434 - 7.483: 69.0350% ( 2785) 00:08:34.577 7.483 - 7.532: 78.9640% ( 1635) 00:08:34.577 7.532 - 7.582: 84.3930% ( 894) 00:08:34.577 7.582 - 7.631: 87.6237% ( 532) 00:08:34.577 7.631 - 7.680: 89.4820% ( 306) 00:08:34.577 7.680 - 7.729: 90.5083% ( 169) 00:08:34.577 7.729 - 7.778: 91.1034% ( 98) 00:08:34.577 7.778 - 7.828: 91.4981% ( 65) 00:08:34.577 7.828 - 7.877: 91.9657% ( 77) 00:08:34.577 7.877 - 7.926: 92.3605% ( 65) 00:08:34.577 7.926 - 7.975: 92.6520% ( 48) 00:08:34.577 7.975 - 8.025: 92.8827% ( 38) 00:08:34.577 8.025 - 8.074: 93.1074% ( 37) 00:08:34.577 8.074 - 8.123: 93.3382% ( 38) 00:08:34.577 8.123 - 8.172: 93.4961% ( 26) 00:08:34.577 8.172 - 8.222: 93.6418% ( 24) 00:08:34.577 8.222 - 8.271: 93.7754% ( 22) 00:08:34.577 8.271 - 8.320: 93.9151% ( 23) 00:08:34.577 8.320 - 8.369: 94.0730% ( 26) 00:08:34.577 8.369 - 8.418: 94.1580% ( 14) 00:08:34.577 8.418 - 8.468: 94.2552% ( 16) 00:08:34.577 8.468 - 8.517: 94.3402% ( 14) 00:08:34.577 8.517 - 8.566: 94.4131% ( 12) 00:08:34.577 8.566 - 8.615: 94.4677% ( 9) 00:08:34.577 8.615 - 8.665: 94.4981% ( 5) 00:08:34.577 8.665 - 8.714: 94.5831% ( 14) 00:08:34.577 8.714 - 8.763: 94.6378% ( 9) 00:08:34.577 8.763 - 8.812: 94.6621% ( 4) 00:08:34.577 8.812 - 8.862: 94.6924% ( 5) 00:08:34.577 8.862 - 8.911: 94.7046% ( 2) 00:08:34.577 8.911 - 8.960: 94.7228% ( 3) 00:08:34.577 8.960 - 9.009: 94.7531% ( 5) 00:08:34.577 9.009 - 9.058: 94.7592% ( 1) 00:08:34.577 9.058 - 9.108: 94.7653% ( 1) 00:08:34.577 9.108 - 9.157: 94.7957% ( 5) 00:08:34.577 9.157 - 9.206: 94.8199% ( 4) 00:08:34.577 9.206 - 9.255: 94.8321% ( 2) 00:08:34.577 9.255 - 9.305: 94.8564% ( 4) 00:08:34.577 9.305 - 9.354: 94.8989% ( 7) 00:08:34.577 9.354 - 9.403: 94.9353% ( 6) 00:08:34.577 9.403 - 9.452: 94.9475% ( 2) 00:08:34.577 9.452 - 9.502: 94.9718% ( 4) 00:08:34.577 9.502 - 9.551: 94.9900% ( 3) 00:08:34.577 9.551 - 9.600: 95.0203% ( 5) 00:08:34.577 9.600 - 9.649: 95.0568% ( 6) 00:08:34.577 9.649 - 9.698: 95.0811% ( 4) 00:08:34.577 9.698 - 9.748: 95.0871% ( 1) 00:08:34.577 9.748 - 9.797: 95.0932% ( 1) 00:08:34.577 9.797 - 9.846: 95.1357% ( 7) 00:08:34.577 9.846 - 9.895: 95.1965% ( 10) 00:08:34.577 9.895 - 9.945: 95.2147% ( 3) 00:08:34.577 9.945 - 9.994: 95.2511% ( 6) 00:08:34.577 9.994 - 10.043: 95.3058% ( 9) 00:08:34.577 10.043 - 10.092: 95.3786% ( 12) 00:08:34.577 10.092 - 10.142: 95.4637% ( 14) 00:08:34.577 10.142 - 10.191: 95.5547% ( 15) 00:08:34.577 10.191 - 10.240: 95.6701% ( 19) 00:08:34.577 10.240 - 10.289: 95.8219% ( 25) 00:08:34.577 10.289 - 10.338: 95.8887% ( 11) 00:08:34.577 10.338 - 10.388: 96.0041% ( 19) 00:08:34.577 10.388 - 10.437: 96.1013% ( 16) 00:08:34.577 10.437 - 10.486: 96.2167% ( 19) 00:08:34.577 10.486 - 10.535: 96.3563% ( 23) 00:08:34.577 10.535 - 10.585: 96.5082% ( 25) 00:08:34.577 10.585 - 10.634: 96.6175% ( 18) 00:08:34.577 10.634 - 10.683: 96.6964% ( 13) 00:08:34.577 10.683 - 10.732: 96.7814% ( 14) 00:08:34.577 10.732 - 10.782: 96.8847% ( 17) 00:08:34.577 10.782 - 10.831: 96.9818% ( 16) 00:08:34.577 10.831 - 10.880: 97.0608% ( 13) 00:08:34.577 10.880 - 10.929: 97.1701% ( 18) 00:08:34.577 10.929 - 10.978: 
97.3098% ( 23) 00:08:34.577 10.978 - 11.028: 97.3705% ( 10) 00:08:34.577 11.028 - 11.077: 97.4373% ( 11) 00:08:34.577 11.077 - 11.126: 97.4980% ( 10) 00:08:34.577 11.126 - 11.175: 97.5527% ( 9) 00:08:34.577 11.175 - 11.225: 97.6620% ( 18) 00:08:34.577 11.225 - 11.274: 97.7470% ( 14) 00:08:34.577 11.274 - 11.323: 97.7956% ( 8) 00:08:34.577 11.323 - 11.372: 97.8563% ( 10) 00:08:34.577 11.372 - 11.422: 97.8928% ( 6) 00:08:34.577 11.422 - 11.471: 97.9535% ( 10) 00:08:34.577 11.471 - 11.520: 98.0142% ( 10) 00:08:34.577 11.520 - 11.569: 98.0749% ( 10) 00:08:34.577 11.569 - 11.618: 98.1114% ( 6) 00:08:34.577 11.618 - 11.668: 98.1357% ( 4) 00:08:34.577 11.668 - 11.717: 98.1782% ( 7) 00:08:34.577 11.717 - 11.766: 98.1842% ( 1) 00:08:34.577 11.766 - 11.815: 98.2146% ( 5) 00:08:34.577 11.815 - 11.865: 98.2268% ( 2) 00:08:34.577 11.865 - 11.914: 98.2571% ( 5) 00:08:34.577 11.914 - 11.963: 98.2875% ( 5) 00:08:34.577 11.963 - 12.012: 98.3057% ( 3) 00:08:34.577 12.012 - 12.062: 98.3178% ( 2) 00:08:34.577 12.062 - 12.111: 98.3300% ( 2) 00:08:34.577 12.111 - 12.160: 98.3482% ( 3) 00:08:34.577 12.209 - 12.258: 98.3543% ( 1) 00:08:34.577 12.258 - 12.308: 98.3725% ( 3) 00:08:34.577 12.308 - 12.357: 98.3786% ( 1) 00:08:34.577 12.357 - 12.406: 98.3907% ( 2) 00:08:34.577 12.554 - 12.603: 98.3968% ( 1) 00:08:34.578 12.702 - 12.800: 98.4150% ( 3) 00:08:34.578 12.800 - 12.898: 98.4211% ( 1) 00:08:34.578 12.898 - 12.997: 98.4332% ( 2) 00:08:34.578 12.997 - 13.095: 98.4879% ( 9) 00:08:34.578 13.095 - 13.194: 98.5304% ( 7) 00:08:34.578 13.194 - 13.292: 98.5850% ( 9) 00:08:34.578 13.292 - 13.391: 98.6093% ( 4) 00:08:34.578 13.391 - 13.489: 98.6579% ( 8) 00:08:34.578 13.489 - 13.588: 98.6761% ( 3) 00:08:34.578 13.588 - 13.686: 98.7126% ( 6) 00:08:34.578 13.686 - 13.785: 98.7733% ( 10) 00:08:34.578 13.785 - 13.883: 98.8097% ( 6) 00:08:34.578 13.883 - 13.982: 98.8522% ( 7) 00:08:34.578 13.982 - 14.080: 98.8948% ( 7) 00:08:34.578 14.080 - 14.178: 98.9433% ( 8) 00:08:34.578 14.178 - 14.277: 99.0223% ( 13) 00:08:34.578 14.277 - 14.375: 99.0466% ( 4) 00:08:34.578 14.375 - 14.474: 99.1195% ( 12) 00:08:34.578 14.474 - 14.572: 99.1620% ( 7) 00:08:34.578 14.572 - 14.671: 99.2227% ( 10) 00:08:34.578 14.671 - 14.769: 99.2773% ( 9) 00:08:34.578 14.769 - 14.868: 99.3199% ( 7) 00:08:34.578 14.868 - 14.966: 99.3563% ( 6) 00:08:34.578 14.966 - 15.065: 99.3806% ( 4) 00:08:34.578 15.065 - 15.163: 99.4109% ( 5) 00:08:34.578 15.163 - 15.262: 99.4413% ( 5) 00:08:34.578 15.262 - 15.360: 99.5081% ( 11) 00:08:34.578 15.360 - 15.458: 99.5324% ( 4) 00:08:34.578 15.458 - 15.557: 99.5445% ( 2) 00:08:34.578 15.557 - 15.655: 99.5749% ( 5) 00:08:34.578 15.655 - 15.754: 99.6174% ( 7) 00:08:34.578 15.754 - 15.852: 99.6356% ( 3) 00:08:34.578 15.852 - 15.951: 99.6539% ( 3) 00:08:34.578 16.345 - 16.443: 99.6721% ( 3) 00:08:34.578 16.542 - 16.640: 99.6781% ( 1) 00:08:34.578 16.640 - 16.738: 99.6903% ( 2) 00:08:34.578 16.738 - 16.837: 99.7085% ( 3) 00:08:34.578 16.837 - 16.935: 99.7146% ( 1) 00:08:34.578 16.935 - 17.034: 99.7207% ( 1) 00:08:34.578 17.034 - 17.132: 99.7267% ( 1) 00:08:34.578 17.231 - 17.329: 99.7328% ( 1) 00:08:34.578 17.723 - 17.822: 99.7389% ( 1) 00:08:34.578 17.822 - 17.920: 99.7449% ( 1) 00:08:34.578 17.920 - 18.018: 99.7571% ( 2) 00:08:34.578 18.215 - 18.314: 99.7632% ( 1) 00:08:34.578 18.314 - 18.412: 99.7692% ( 1) 00:08:34.578 18.412 - 18.511: 99.7753% ( 1) 00:08:34.578 18.511 - 18.609: 99.7935% ( 3) 00:08:34.578 18.609 - 18.708: 99.7996% ( 1) 00:08:34.578 18.708 - 18.806: 99.8117% ( 2) 00:08:34.578 19.003 - 19.102: 99.8178% ( 1) 
00:08:34.578 19.200 - 19.298: 99.8239% ( 1) 00:08:34.578 19.298 - 19.397: 99.8300% ( 1) 00:08:34.578 19.495 - 19.594: 99.8360% ( 1) 00:08:34.578 19.594 - 19.692: 99.8421% ( 1) 00:08:34.578 19.692 - 19.791: 99.8482% ( 1) 00:08:34.578 19.791 - 19.889: 99.8603% ( 2) 00:08:34.578 20.677 - 20.775: 99.8664% ( 1) 00:08:34.578 20.775 - 20.874: 99.8725% ( 1) 00:08:34.578 21.268 - 21.366: 99.8785% ( 1) 00:08:34.578 21.366 - 21.465: 99.8846% ( 1) 00:08:34.578 21.563 - 21.662: 99.8907% ( 1) 00:08:34.578 21.760 - 21.858: 99.8968% ( 1) 00:08:34.578 22.252 - 22.351: 99.9028% ( 1) 00:08:34.578 22.351 - 22.449: 99.9089% ( 1) 00:08:34.578 23.237 - 23.335: 99.9150% ( 1) 00:08:34.578 23.532 - 23.631: 99.9271% ( 2) 00:08:34.578 24.320 - 24.418: 99.9332% ( 1) 00:08:34.578 24.517 - 24.615: 99.9393% ( 1) 00:08:34.578 25.600 - 25.797: 99.9453% ( 1) 00:08:34.578 26.388 - 26.585: 99.9514% ( 1) 00:08:34.578 26.782 - 26.978: 99.9575% ( 1) 00:08:34.578 28.554 - 28.751: 99.9636% ( 1) 00:08:34.578 33.083 - 33.280: 99.9696% ( 1) 00:08:34.578 38.597 - 38.794: 99.9757% ( 1) 00:08:34.578 43.126 - 43.323: 99.9818% ( 1) 00:08:34.578 60.652 - 61.046: 99.9879% ( 1) 00:08:34.578 308.775 - 310.351: 99.9939% ( 1) 00:08:34.578 450.560 - 453.711: 100.0000% ( 1) 00:08:34.578 00:08:34.578 00:08:34.578 real 0m1.232s 00:08:34.578 user 0m1.074s 00:08:34.578 sys 0m0.096s 00:08:34.578 15:56:32 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.578 15:56:32 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:34.578 ************************************ 00:08:34.578 END TEST nvme_overhead 00:08:34.578 ************************************ 00:08:34.578 15:56:32 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:34.578 15:56:32 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:34.578 15:56:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.578 15:56:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.578 ************************************ 00:08:34.578 START TEST nvme_arbitration 00:08:34.578 ************************************ 00:08:34.578 15:56:32 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:37.886 Initializing NVMe Controllers 00:08:37.886 Attached to 0000:00:10.0 00:08:37.886 Attached to 0000:00:11.0 00:08:37.886 Attached to 0000:00:13.0 00:08:37.886 Attached to 0000:00:12.0 00:08:37.886 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:37.886 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:37.886 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:37.886 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:37.886 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:37.886 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:37.886 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:37.886 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:37.886 Initialization complete. Launching workers. 
00:08:37.886 Starting thread on core 1 with urgent priority queue
00:08:37.886 Starting thread on core 2 with urgent priority queue
00:08:37.886 Starting thread on core 3 with urgent priority queue
00:08:37.886 Starting thread on core 0 with urgent priority queue
00:08:37.886 QEMU NVMe Ctrl (12340 ) core 0: 832.00 IO/s 120.19 secs/100000 ios
00:08:37.886 QEMU NVMe Ctrl (12342 ) core 0: 832.00 IO/s 120.19 secs/100000 ios
00:08:37.886 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:08:37.886 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:08:37.886 QEMU NVMe Ctrl (12343 ) core 2: 874.67 IO/s 114.33 secs/100000 ios
00:08:37.886 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios
00:08:37.886 ========================================================
00:08:37.886
00:08:37.886
00:08:37.886 real 0m3.341s
00:08:37.886 user 0m9.327s
00:08:37.886 sys 0m0.114s
15:56:35 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:37.886 15:56:35 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:08:37.886 ************************************
00:08:37.886 END TEST nvme_arbitration
00:08:37.886 ************************************
00:08:37.886 15:56:35 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:37.886 15:56:35 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:37.886 15:56:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:37.886 15:56:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.886 ************************************
00:08:37.886 START TEST nvme_single_aen
00:08:37.886 ************************************
00:08:37.886 15:56:35 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:37.886 Asynchronous Event Request test
00:08:37.886 Attached to 0000:00:10.0
00:08:37.886 Attached to 0000:00:11.0
00:08:37.886 Attached to 0000:00:13.0
00:08:37.886 Attached to 0000:00:12.0
00:08:37.886 Reset controller to setup AER completions for this process
00:08:37.886 Registering asynchronous event callbacks...
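Each arbitration result line above states the same measurement two ways: IOs per second and seconds per 100000 IOs, one being the reciprocal of the other scaled by 100000. With the 3-second run time configured above (-t 3), core 0's 832.00 IO/s implies 2496 completed IOs; the raw count is inferred, since the tool prints only the rates:

```c
#include <stdio.h>

int main(void)
{
    /* Raw result for one worker: IOs completed during the 3 s run.
     * 2496 / 3 reproduces the "832.00 IO/s 120.19 secs/100000 ios"
     * line above; the raw count itself is inferred, not printed. */
    long ios = 2496;
    double secs = 3.0;

    double io_per_sec = ios / secs;
    double secs_per_100k = 100000.0 / io_per_sec;

    printf("%.2f IO/s %.2f secs/100000 ios\n", io_per_sec, secs_per_100k);
    return 0;
}
```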
00:08:37.886 Getting orig temperature thresholds of all controllers 00:08:37.886 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.886 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.886 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.886 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.886 Setting all controllers temperature threshold low to trigger AER 00:08:37.886 Waiting for all controllers temperature threshold to be set lower 00:08:37.886 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.886 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:37.886 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.886 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:37.886 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.886 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:37.886 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.886 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:37.886 Waiting for all controllers to trigger AER and reset threshold 00:08:37.886 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.886 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.886 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.886 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.886 Cleaning up... 00:08:37.886 00:08:37.886 real 0m0.250s 00:08:37.886 user 0m0.078s 00:08:37.886 sys 0m0.122s 00:08:37.886 15:56:36 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.886 15:56:36 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:37.886 ************************************ 00:08:37.886 END TEST nvme_single_aen 00:08:37.886 ************************************ 00:08:37.886 15:56:36 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:37.886 15:56:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.886 15:56:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.886 15:56:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.886 ************************************ 00:08:37.886 START TEST nvme_doorbell_aers 00:08:37.886 ************************************ 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
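The nvme_doorbell_aers function being assembled here enumerates the controller BDFs and then hits each one with out-of-range doorbell writes, expecting the failures that the "Failure: test_write_invalid_db" lines below confirm. As a simplified model (real NVMe doorbell semantics also involve head/tail wrap rules), a controller-side bounds check looks roughly like this:

```c
#include <stdio.h>

/* Simplified controller-side check of a submission-queue tail doorbell:
 * any slot index >= queue size is an invalid doorbell write, which the
 * device reports asynchronously. Queue size and values are illustrative. */
struct sq {
    unsigned size;  /* number of slots in the queue */
    unsigned tail;  /* current tail index */
};

static int write_sq_doorbell(struct sq *q, unsigned new_tail)
{
    if (new_tail >= q->size)
        return -1;  /* invalid db write: surfaces as an async event */
    q->tail = new_tail;
    return 0;
}

int main(void)
{
    struct sq q = { .size = 64, .tail = 0 };
    printf("tail=10 -> %s\n", write_sq_doorbell(&q, 10) ? "invalid" : "ok");
    printf("tail=64 -> %s\n", write_sq_doorbell(&q, 64) ? "invalid" : "ok");
    return 0;
}
```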
00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:37.886 15:56:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.143 [2024-11-20 15:56:36.339121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:08:48.105 Executing: test_write_invalid_db 00:08:48.105 Waiting for AER completion... 00:08:48.105 Failure: test_write_invalid_db 00:08:48.105 00:08:48.105 Executing: test_invalid_db_write_overflow_sq 00:08:48.105 Waiting for AER completion... 00:08:48.105 Failure: test_invalid_db_write_overflow_sq 00:08:48.105 00:08:48.105 Executing: test_invalid_db_write_overflow_cq 00:08:48.105 Waiting for AER completion... 00:08:48.105 Failure: test_invalid_db_write_overflow_cq 00:08:48.105 00:08:48.105 15:56:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:48.105 15:56:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:48.363 [2024-11-20 15:56:46.358849] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:08:58.326 Executing: test_write_invalid_db 00:08:58.326 Waiting for AER completion... 00:08:58.326 Failure: test_write_invalid_db 00:08:58.326 00:08:58.326 Executing: test_invalid_db_write_overflow_sq 00:08:58.326 Waiting for AER completion... 00:08:58.326 Failure: test_invalid_db_write_overflow_sq 00:08:58.326 00:08:58.326 Executing: test_invalid_db_write_overflow_cq 00:08:58.326 Waiting for AER completion... 00:08:58.326 Failure: test_invalid_db_write_overflow_cq 00:08:58.326 00:08:58.326 15:56:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:58.326 15:56:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:58.326 [2024-11-20 15:56:56.372499] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:08.287 Executing: test_write_invalid_db 00:09:08.287 Waiting for AER completion... 00:09:08.287 Failure: test_write_invalid_db 00:09:08.287 00:09:08.287 Executing: test_invalid_db_write_overflow_sq 00:09:08.287 Waiting for AER completion... 00:09:08.287 Failure: test_invalid_db_write_overflow_sq 00:09:08.287 00:09:08.287 Executing: test_invalid_db_write_overflow_cq 00:09:08.287 Waiting for AER completion... 
00:09:08.287 Failure: test_invalid_db_write_overflow_cq 00:09:08.287 00:09:08.287 15:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:08.287 15:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:08.287 [2024-11-20 15:57:06.411819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.248 Executing: test_write_invalid_db 00:09:18.248 Waiting for AER completion... 00:09:18.248 Failure: test_write_invalid_db 00:09:18.248 00:09:18.248 Executing: test_invalid_db_write_overflow_sq 00:09:18.248 Waiting for AER completion... 00:09:18.248 Failure: test_invalid_db_write_overflow_sq 00:09:18.248 00:09:18.248 Executing: test_invalid_db_write_overflow_cq 00:09:18.248 Waiting for AER completion... 00:09:18.248 Failure: test_invalid_db_write_overflow_cq 00:09:18.248 00:09:18.248 00:09:18.248 real 0m40.194s 00:09:18.248 user 0m34.229s 00:09:18.248 sys 0m5.564s 00:09:18.248 15:57:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.248 ************************************ 00:09:18.248 END TEST nvme_doorbell_aers 00:09:18.248 15:57:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:18.248 ************************************ 00:09:18.248 15:57:16 nvme -- nvme/nvme.sh@97 -- # uname 00:09:18.248 15:57:16 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:18.249 15:57:16 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:18.249 15:57:16 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:18.249 15:57:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.249 15:57:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 ************************************ 00:09:18.249 START TEST nvme_multi_aen 00:09:18.249 ************************************ 00:09:18.249 15:57:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:18.249 [2024-11-20 15:57:16.490987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.491064] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.491077] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.492325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.492361] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.492369] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.493198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. 
Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.493222] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.493231] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.494267] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.494376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.249 [2024-11-20 15:57:16.494453] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:18.505 Child process pid: 63795 00:09:18.505 [Child] Asynchronous Event Request test 00:09:18.505 [Child] Attached to 0000:00:10.0 00:09:18.505 [Child] Attached to 0000:00:11.0 00:09:18.505 [Child] Attached to 0000:00:13.0 00:09:18.505 [Child] Attached to 0000:00:12.0 00:09:18.505 [Child] Registering asynchronous event callbacks... 00:09:18.505 [Child] Getting orig temperature thresholds of all controllers 00:09:18.505 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.505 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.505 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.505 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.505 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:18.505 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.505 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.505 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.505 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.505 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.505 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.505 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.505 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.505 [Child] Cleaning up... 00:09:18.505 Asynchronous Event Request test 00:09:18.505 Attached to 0000:00:10.0 00:09:18.505 Attached to 0000:00:11.0 00:09:18.505 Attached to 0000:00:13.0 00:09:18.505 Attached to 0000:00:12.0 00:09:18.505 Reset controller to setup AER completions for this process 00:09:18.505 Registering asynchronous event callbacks... 
00:09:18.505 Getting orig temperature thresholds of all controllers 00:09:18.505 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.506 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.506 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.506 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:18.506 Setting all controllers temperature threshold low to trigger AER 00:09:18.506 Waiting for all controllers temperature threshold to be set lower 00:09:18.506 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.506 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:18.506 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.506 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:18.506 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.506 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:18.506 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:18.506 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:18.506 Waiting for all controllers to trigger AER and reset threshold 00:09:18.506 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.506 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.506 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.506 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.506 Cleaning up... 00:09:18.763 00:09:18.763 real 0m0.480s 00:09:18.763 user 0m0.146s 00:09:18.763 sys 0m0.218s 00:09:18.763 15:57:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.763 15:57:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:18.763 ************************************ 00:09:18.763 END TEST nvme_multi_aen 00:09:18.763 ************************************ 00:09:18.763 15:57:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:18.763 15:57:16 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:18.763 15:57:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.763 15:57:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.763 ************************************ 00:09:18.763 START TEST nvme_startup 00:09:18.763 ************************************ 00:09:18.763 15:57:16 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:18.763 Initializing NVMe Controllers 00:09:18.763 Attached to 0000:00:10.0 00:09:18.763 Attached to 0000:00:11.0 00:09:18.763 Attached to 0000:00:13.0 00:09:18.763 Attached to 0000:00:12.0 00:09:18.763 Initialization complete. 00:09:18.763 Time used:148659.328 (us). 
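Note: the nvme_multi_aen stage above runs /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0; judging from the [Child] prefixes, the tool forks a helper process (pid 63795 here) so asynchronous events are exercised from both a primary and a secondary process against the same four controllers. The trigger mechanism is visible in the output: every controller reports a current composite temperature of 323 Kelvin (50 Celsius; NVMe reports temperatures in Kelvin) against an original threshold of 343 Kelvin (70 Celsius). The test lowers the threshold below the current reading, the controller raises a temperature AER (SMART/health log page 2, aen_event_type 0x01, aen_event_info 0x01), and the callback restores the threshold. A rough out-of-tree equivalent with nvme-cli, assuming a kernel-attached device at the hypothetical path /dev/nvme0 (feature 0x04 is the standard Temperature Threshold feature):

    # read the current threshold, then set it below the 323 K reading
    sudo nvme get-feature /dev/nvme0 -f 0x04
    sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x140   # 0x140 = 320 K (47 Celsius)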
00:09:18.763 00:09:18.763 real 0m0.209s 00:09:18.763 user 0m0.073s 00:09:18.763 sys 0m0.092s 00:09:18.763 15:57:17 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.763 15:57:17 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:18.763 ************************************ 00:09:18.763 END TEST nvme_startup 00:09:18.763 ************************************ 00:09:19.020 15:57:17 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:19.020 15:57:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.020 15:57:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.020 15:57:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.020 ************************************ 00:09:19.020 START TEST nvme_multi_secondary 00:09:19.020 ************************************ 00:09:19.020 15:57:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:19.020 15:57:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63845 00:09:19.020 15:57:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63846 00:09:19.020 15:57:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:19.020 15:57:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:19.020 15:57:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:22.291 Initializing NVMe Controllers 00:09:22.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:22.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:22.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:22.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:22.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:22.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:22.291 Initialization complete. Launching workers. 
00:09:22.291 ======================================================== 00:09:22.291 Latency(us) 00:09:22.291 Device Information : IOPS MiB/s Average min max 00:09:22.291 PCIE (0000:00:10.0) NSID 1 from core 1: 8207.08 32.06 1948.14 658.43 6202.77 00:09:22.291 PCIE (0000:00:11.0) NSID 1 from core 1: 8207.08 32.06 1949.40 681.58 5934.09 00:09:22.291 PCIE (0000:00:13.0) NSID 1 from core 1: 8207.08 32.06 1949.43 688.28 6334.42 00:09:22.291 PCIE (0000:00:12.0) NSID 1 from core 1: 8212.41 32.08 1948.19 706.50 6078.44 00:09:22.291 PCIE (0000:00:12.0) NSID 2 from core 1: 8207.08 32.06 1949.44 694.78 6015.41 00:09:22.291 PCIE (0000:00:12.0) NSID 3 from core 1: 8207.08 32.06 1949.50 693.19 6630.60 00:09:22.291 ======================================================== 00:09:22.291 Total : 49247.79 192.37 1949.01 658.43 6630.60 00:09:22.291 00:09:22.291 Initializing NVMe Controllers 00:09:22.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:22.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:22.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:22.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:22.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:22.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:22.291 Initialization complete. Launching workers. 00:09:22.291 ======================================================== 00:09:22.291 Latency(us) 00:09:22.291 Device Information : IOPS MiB/s Average min max 00:09:22.291 PCIE (0000:00:10.0) NSID 1 from core 2: 3358.26 13.12 4763.05 1121.73 16734.40 00:09:22.291 PCIE (0000:00:11.0) NSID 1 from core 2: 3358.26 13.12 4763.43 1115.10 16850.65 00:09:22.291 PCIE (0000:00:13.0) NSID 1 from core 2: 3358.26 13.12 4764.48 1251.46 13616.78 00:09:22.291 PCIE (0000:00:12.0) NSID 1 from core 2: 3357.92 13.12 4765.01 1147.12 14395.47 00:09:22.291 PCIE (0000:00:12.0) NSID 2 from core 2: 3358.26 13.12 4764.21 1156.88 13633.42 00:09:22.291 PCIE (0000:00:12.0) NSID 3 from core 2: 3358.26 13.12 4770.95 1253.78 16879.87 00:09:22.291 ======================================================== 00:09:22.291 Total : 20149.20 78.71 4765.19 1115.10 16879.87 00:09:22.291 00:09:22.291 15:57:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63845 00:09:24.814 Initializing NVMe Controllers 00:09:24.814 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:24.814 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:24.814 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:24.814 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:24.814 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:24.814 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:24.814 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:24.814 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:24.814 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:24.814 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:24.814 Initialization complete. Launching workers. 
00:09:24.814 ======================================================== 00:09:24.814 Latency(us) 00:09:24.814 Device Information : IOPS MiB/s Average min max 00:09:24.814 PCIE (0000:00:10.0) NSID 1 from core 0: 10914.74 42.64 1464.61 649.58 8110.02 00:09:24.814 PCIE (0000:00:11.0) NSID 1 from core 0: 10914.74 42.64 1465.48 670.19 7991.80 00:09:24.814 PCIE (0000:00:13.0) NSID 1 from core 0: 10914.74 42.64 1465.44 604.76 8169.25 00:09:24.814 PCIE (0000:00:12.0) NSID 1 from core 0: 10914.74 42.64 1465.41 581.29 8018.87 00:09:24.814 PCIE (0000:00:12.0) NSID 2 from core 0: 10914.74 42.64 1465.38 523.41 8007.29 00:09:24.814 PCIE (0000:00:12.0) NSID 3 from core 0: 10917.94 42.65 1464.92 494.59 8034.78 00:09:24.814 ======================================================== 00:09:24.814 Total : 65491.64 255.83 1465.21 494.59 8169.25 00:09:24.814 00:09:24.814 15:57:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63846 00:09:24.814 15:57:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63921 00:09:24.814 15:57:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:24.814 15:57:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63922 00:09:24.814 15:57:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:24.814 15:57:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:28.121 Initializing NVMe Controllers 00:09:28.121 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.121 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.121 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.121 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.121 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:28.121 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:28.121 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:28.121 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:28.121 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:28.121 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:28.121 Initialization complete. Launching workers. 
00:09:28.121 ======================================================== 00:09:28.121 Latency(us) 00:09:28.121 Device Information : IOPS MiB/s Average min max 00:09:28.121 PCIE (0000:00:10.0) NSID 1 from core 0: 7792.81 30.44 2051.79 710.38 6977.45 00:09:28.121 PCIE (0000:00:11.0) NSID 1 from core 0: 7792.81 30.44 2052.63 729.02 7132.63 00:09:28.121 PCIE (0000:00:13.0) NSID 1 from core 0: 7792.81 30.44 2052.83 735.22 7038.19 00:09:28.121 PCIE (0000:00:12.0) NSID 1 from core 0: 7792.81 30.44 2052.92 746.78 7221.80 00:09:28.121 PCIE (0000:00:12.0) NSID 2 from core 0: 7792.81 30.44 2052.99 749.96 7159.48 00:09:28.121 PCIE (0000:00:12.0) NSID 3 from core 0: 7792.81 30.44 2052.99 738.85 7265.21 00:09:28.121 ======================================================== 00:09:28.121 Total : 46756.88 182.64 2052.69 710.38 7265.21 00:09:28.121 00:09:28.121 Initializing NVMe Controllers 00:09:28.121 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.121 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.121 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.121 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.121 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:28.121 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:28.121 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:28.121 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:28.121 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:28.121 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:28.121 Initialization complete. Launching workers. 00:09:28.121 ======================================================== 00:09:28.121 Latency(us) 00:09:28.121 Device Information : IOPS MiB/s Average min max 00:09:28.121 PCIE (0000:00:10.0) NSID 1 from core 1: 7822.72 30.56 2043.92 704.55 6158.39 00:09:28.121 PCIE (0000:00:11.0) NSID 1 from core 1: 7822.72 30.56 2044.97 715.75 5910.78 00:09:28.121 PCIE (0000:00:13.0) NSID 1 from core 1: 7822.72 30.56 2045.07 729.64 5853.50 00:09:28.121 PCIE (0000:00:12.0) NSID 1 from core 1: 7822.72 30.56 2045.11 722.95 5695.62 00:09:28.121 PCIE (0000:00:12.0) NSID 2 from core 1: 7822.72 30.56 2045.09 730.72 5250.62 00:09:28.121 PCIE (0000:00:12.0) NSID 3 from core 1: 7822.72 30.56 2045.05 727.13 5767.82 00:09:28.121 ======================================================== 00:09:28.121 Total : 46936.32 183.34 2044.87 704.55 6158.39 00:09:28.121 00:09:29.494 Initializing NVMe Controllers 00:09:29.494 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.494 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.494 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.494 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.494 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:29.494 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:29.494 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:29.494 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:29.494 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:29.494 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:29.494 Initialization complete. Launching workers. 
00:09:29.494 ======================================================== 00:09:29.494 Latency(us) 00:09:29.494 Device Information : IOPS MiB/s Average min max 00:09:29.494 PCIE (0000:00:10.0) NSID 1 from core 2: 4554.24 17.79 3511.19 723.15 12908.22 00:09:29.494 PCIE (0000:00:11.0) NSID 1 from core 2: 4554.24 17.79 3509.80 739.30 13028.17 00:09:29.494 PCIE (0000:00:13.0) NSID 1 from core 2: 4554.24 17.79 3509.55 751.02 13037.70 00:09:29.494 PCIE (0000:00:12.0) NSID 1 from core 2: 4554.24 17.79 3509.72 684.80 12508.29 00:09:29.494 PCIE (0000:00:12.0) NSID 2 from core 2: 4554.24 17.79 3509.64 657.56 12501.73 00:09:29.494 PCIE (0000:00:12.0) NSID 3 from core 2: 4554.24 17.79 3509.60 613.34 13139.06 00:09:29.494 ======================================================== 00:09:29.494 Total : 27325.46 106.74 3509.92 613.34 13139.06 00:09:29.494 00:09:29.753 ************************************ 00:09:29.753 END TEST nvme_multi_secondary 00:09:29.753 ************************************ 00:09:29.753 15:57:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63921 00:09:29.753 15:57:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63922 00:09:29.753 00:09:29.753 real 0m10.715s 00:09:29.753 user 0m18.381s 00:09:29.753 sys 0m0.664s 00:09:29.753 15:57:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.753 15:57:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:29.753 15:57:27 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:29.753 15:57:27 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62876 ]] 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1094 -- # kill 62876 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1095 -- # wait 62876 00:09:29.753 [2024-11-20 15:57:27.797206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.797400] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.797430] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.797447] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.799640] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.799695] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.799710] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.799740] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.802875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 
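Note: nvme_multi_secondary above is a multi-process check rather than a throughput run: several spdk_nvme_perf instances share the controllers by joining the same shared-memory group (-i 0) on disjoint core masks, so one instance comes up as the DPDK primary process and the others attach to its hugepage state as secondaries. Reduced to the commands recorded in the xtrace (the harness sequences and waits on specific pids; start the -c 0x1 instance first so a primary exists for the others to join):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # shm group 0, lcore 0
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # shm group 0, lcore 1
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # shm group 0, lcore 2
    wait

The markedly lower IOPS in the "from core 2" tables (~3.4k against ~8.2k per namespace in the first round) is unsurprising with three overlapping readers on the same four [1b36:0010] controllers; what the stage effectively asserts is that all six runs complete and the waited-on pids (63845/63846, then 63921/63922) exit cleanly. The repeated "owning process (pid 63794) is not found" errors above come from cleanup of admin requests left behind by the killed stub process, and are apparently harmless here since the stage still passes.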
00:09:29.753 [2024-11-20 15:57:27.802983] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.803013] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.803041] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.806822] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.806907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.806937] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 [2024-11-20 15:57:27.806965] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63794) is not found. Dropping the request. 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:29.753 15:57:27 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.753 15:57:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.753 ************************************ 00:09:29.753 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:29.753 ************************************ 00:09:29.753 15:57:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:29.753 * Looking for test storage... 
00:09:29.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:29.753 15:57:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.013 --rc genhtml_branch_coverage=1 00:09:30.013 --rc genhtml_function_coverage=1 00:09:30.013 --rc genhtml_legend=1 00:09:30.013 --rc geninfo_all_blocks=1 00:09:30.013 --rc geninfo_unexecuted_blocks=1 00:09:30.013 00:09:30.013 ' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.013 --rc genhtml_branch_coverage=1 00:09:30.013 --rc genhtml_function_coverage=1 00:09:30.013 --rc genhtml_legend=1 00:09:30.013 --rc geninfo_all_blocks=1 00:09:30.013 --rc geninfo_unexecuted_blocks=1 00:09:30.013 00:09:30.013 ' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.013 --rc genhtml_branch_coverage=1 00:09:30.013 --rc genhtml_function_coverage=1 00:09:30.013 --rc genhtml_legend=1 00:09:30.013 --rc geninfo_all_blocks=1 00:09:30.013 --rc geninfo_unexecuted_blocks=1 00:09:30.013 00:09:30.013 ' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.013 --rc genhtml_branch_coverage=1 00:09:30.013 --rc genhtml_function_coverage=1 00:09:30.013 --rc genhtml_legend=1 00:09:30.013 --rc geninfo_all_blocks=1 00:09:30.013 --rc geninfo_unexecuted_blocks=1 00:09:30.013 00:09:30.013 ' 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:30.013 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:30.014 
15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:30.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64078 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64078 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64078 ']' 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
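Note: the xtrace above pins down the fixture for this test before any I/O happens: the completion status to inject (err_injection_sct=0, err_injection_sc=1, i.e. generic status "Invalid Opcode"), how long the driver should sit on the injected command (err_injection_timeout=15000000 us, 15 s), a 5 s pass budget (test_timeout=5), and the first controller to target. Device discovery is a single pipeline, with gen_nvme.sh emitting an SPDK bdev config whose traddr fields are the PCI addresses of every local NVMe controller:

    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM;
    #    the test takes the first one and starts spdk_tgt -m 0xF against it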
00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.014 15:57:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 [2024-11-20 15:57:28.214039] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:09:30.014 [2024-11-20 15:57:28.214331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64078 ] 00:09:30.284 [2024-11-20 15:57:28.377926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.284 [2024-11-20 15:57:28.528864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.284 [2024-11-20 15:57:28.528955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.284 [2024-11-20 15:57:28.529041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.284 [2024-11-20 15:57:28.529064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:31.221 nvme0n1 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_eJRrW.txt 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:31.221 true 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732118249 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64101 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:31.221 15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:31.221 
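Note: the RPC calls above stage the actual "stuck admin command" scenario. Condensed, with the RPCs exactly as they appear in the xtrace (opc 10 is 0x0a, the admin Get Features opcode; --do_not_submit plus --timeout-in-us 15000000 makes the driver hold the matching command instead of submitting it; $GET_FEATURES_B64 is a stand-in for the base64 command blob shown in the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" &  # gets stuck
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0   # must flush the held command promptly

The reset is the thing under test: it has to manually complete the pending Get Features request with the injected status rather than hang behind it, which is exactly what the nvme_qpair "Command completed manually" notices below show.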
15:57:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:33.148 [2024-11-20 15:57:31.260688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:09:33.148 [2024-11-20 15:57:31.261272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:09:33.148 [2024-11-20 15:57:31.261317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:09:33.148 [2024-11-20 15:57:31.261333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:33.148 [2024-11-20 15:57:31.262996] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:09:33.148 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64101
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64101
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64101
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_eJRrW.txt
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:09:33.148 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_eJRrW.txt
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64078
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64078 ']'
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64078
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64078
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:33.149 killing process with pid 64078
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64078'
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64078
00:09:33.149 15:57:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64078
00:09:35.045 15:57:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:09:35.045 15:57:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:09:35.045
00:09:35.045 real 0m4.965s
00:09:35.045 user 0m17.630s
00:09:35.045 sys 0m0.506s
00:09:35.045 15:57:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
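Note: the post-mortem above pulls the saved completion (.cpl) out of /tmp/err_inj_eJRrW.txt and decodes it twice with base64_decode_bits, once for SC and once for SCT. The same decode by hand (the 16-byte CQE carries the phase bit and the 15-bit status field in the upper halfword of dword 3):

    printf %s AAAAAAAAAAAAAAAAAAACAA== | base64 -d | hexdump -ve '/1 "0x%02x\n"'
    # 16 bytes, all zero except offset 14 = 0x02
    # status halfword = 0x0002 -> P = 0, SC = (0x0002 >> 1) & 0xff = 0x1,
    #                             SCT = (0x0002 >> 9) & 0x7 = 0x0

That matches the injected --sct 0 --sc 1, so the sh@75 comparison above passes, and diff_time=2 s sits well inside the 5 s test_timeout checked at sh@79. The nvme_fio stage that starts below drives the stock /usr/src/fio/fio binary against SPDK's userspace NVMe driver by preloading the external ioengine plugin, with libasan listed first (the script ldd-greps the plugin for it above, presumably so the sanitizer runtime is loaded before the ASan-built plugin enters the uninstrumented fio). The invocation as it appears further down; note the dots in the traddr, since ':' is a separator in fio filename syntax:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096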
00:09:35.045 15:57:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.045 ************************************ 00:09:35.045 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:35.045 ************************************ 00:09:35.045 15:57:32 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:35.045 15:57:32 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:35.045 15:57:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.045 15:57:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.045 15:57:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.045 ************************************ 00:09:35.045 START TEST nvme_fio 00:09:35.045 ************************************ 00:09:35.045 15:57:32 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:35.045 15:57:32 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:35.045 15:57:32 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:35.045 15:57:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:35.045 15:57:32 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:35.045 15:57:32 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:35.045 15:57:32 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.045 15:57:32 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.045 15:57:32 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:35.045 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:35.045 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.045 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:35.045 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:35.045 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:35.046 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:35.046 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:35.046 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:35.046 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:35.303 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:35.303 15:57:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:35.303 15:57:33 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:35.303 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:35.304 15:57:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:35.561 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:35.561 fio-3.35 00:09:35.561 Starting 1 thread 00:09:42.112 00:09:42.112 test: (groupid=0, jobs=1): err= 0: pid=64247: Wed Nov 20 15:57:40 2024 00:09:42.112 read: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(186MiB/2001msec) 00:09:42.112 slat (nsec): min=3337, max=64552, avg=4903.31, stdev=1956.24 00:09:42.112 clat (usec): min=232, max=7656, avg=2683.05, stdev=685.85 00:09:42.112 lat (usec): min=237, max=7668, avg=2687.96, stdev=687.01 00:09:42.112 clat percentiles (usec): 00:09:42.112 | 1.00th=[ 1582], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:42.112 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:09:42.112 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3458], 95.00th=[ 4228], 00:09:42.112 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 6652], 99.95th=[ 7111], 00:09:42.112 | 99.99th=[ 7570] 00:09:42.112 bw ( KiB/s): min=92112, max=98096, per=99.39%, avg=94714.67, stdev=3067.05, samples=3 00:09:42.112 iops : min=23028, max=24524, avg=23678.67, stdev=766.76, samples=3 00:09:42.112 write: IOPS=23.7k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec); 0 zone resets 00:09:42.112 slat (nsec): min=3447, max=56097, avg=5159.40, stdev=1974.64 00:09:42.112 clat (usec): min=206, max=7634, avg=2686.02, stdev=683.33 00:09:42.112 lat (usec): min=210, max=7647, avg=2691.18, stdev=684.50 00:09:42.112 clat percentiles (usec): 00:09:42.112 | 1.00th=[ 1565], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:42.112 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:09:42.112 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3458], 95.00th=[ 4228], 00:09:42.112 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 7242], 00:09:42.112 | 99.99th=[ 7570] 00:09:42.112 bw ( KiB/s): min=92976, max=97744, per=100.00%, avg=94717.33, stdev=2631.07, samples=3 00:09:42.112 iops : min=23244, max=24436, avg=23679.33, stdev=657.77, samples=3 00:09:42.112 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.11% 00:09:42.112 lat (msec) : 2=3.33%, 4=90.51%, 10=6.01% 00:09:42.112 cpu : usr=99.25%, sys=0.05%, ctx=3, majf=0, 
minf=607 00:09:42.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:42.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.112 issued rwts: total=47671,47357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.112 00:09:42.112 Run status group 0 (all jobs): 00:09:42.112 READ: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:42.112 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:42.112 ----------------------------------------------------- 00:09:42.112 Suppressions used: 00:09:42.112 count bytes template 00:09:42.112 1 32 /usr/src/fio/parse.c 00:09:42.112 1 8 libtcmalloc_minimal.so 00:09:42.112 ----------------------------------------------------- 00:09:42.112 00:09:42.112 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:42.112 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:42.112 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:42.112 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.369 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.369 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:42.625 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:42.625 15:57:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:42.625 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:42.625 15:57:40 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:42.626 15:57:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.883 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:42.883 fio-3.35 00:09:42.883 Starting 1 thread 00:09:57.814 00:09:57.814 test: (groupid=0, jobs=1): err= 0: pid=64313: Wed Nov 20 15:57:54 2024 00:09:57.814 read: IOPS=23.3k, BW=90.8MiB/s (95.2MB/s)(182MiB/2001msec) 00:09:57.814 slat (usec): min=4, max=111, avg= 4.99, stdev= 2.08 00:09:57.814 clat (usec): min=246, max=8193, avg=2749.34, stdev=723.20 00:09:57.814 lat (usec): min=251, max=8198, avg=2754.34, stdev=724.38 00:09:57.814 clat percentiles (usec): 00:09:57.814 | 1.00th=[ 1680], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:57.814 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:57.814 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3195], 95.00th=[ 4359], 00:09:57.814 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7308], 99.95th=[ 7439], 00:09:57.814 | 99.99th=[ 8029] 00:09:57.814 bw ( KiB/s): min=88208, max=95008, per=99.32%, avg=92376.00, stdev=3650.95, samples=3 00:09:57.814 iops : min=22052, max=23752, avg=23094.00, stdev=912.74, samples=3 00:09:57.814 write: IOPS=23.1k, BW=90.3MiB/s (94.6MB/s)(181MiB/2001msec); 0 zone resets 00:09:57.814 slat (nsec): min=4285, max=51749, avg=5242.14, stdev=2016.07 00:09:57.814 clat (usec): min=224, max=8273, avg=2750.65, stdev=722.19 00:09:57.814 lat (usec): min=228, max=8279, avg=2755.89, stdev=723.35 00:09:57.814 clat percentiles (usec): 00:09:57.814 | 1.00th=[ 1713], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2442], 00:09:57.814 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2606], 00:09:57.814 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3163], 95.00th=[ 4293], 00:09:57.814 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7308], 99.95th=[ 7439], 00:09:57.814 | 99.99th=[ 8160] 00:09:57.814 bw ( KiB/s): min=89480, max=94176, per=100.00%, avg=92461.33, stdev=2591.61, samples=3 00:09:57.814 iops : min=22370, max=23544, avg=23115.33, stdev=647.90, samples=3 00:09:57.814 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:57.814 lat (msec) : 2=2.58%, 4=91.06%, 10=6.30% 00:09:57.814 cpu : usr=99.25%, sys=0.00%, ctx=3, majf=0, minf=607 00:09:57.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:57.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.814 issued rwts: total=46526,46233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.814 00:09:57.814 Run status group 0 (all jobs): 00:09:57.814 READ: bw=90.8MiB/s (95.2MB/s), 90.8MiB/s-90.8MiB/s (95.2MB/s-95.2MB/s), io=182MiB (191MB), run=2001-2001msec 00:09:57.814 WRITE: bw=90.3MiB/s (94.6MB/s), 90.3MiB/s-90.3MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:09:57.814 ----------------------------------------------------- 00:09:57.814 Suppressions used: 00:09:57.814 count bytes template 00:09:57.814 1 32 /usr/src/fio/parse.c 00:09:57.814 1 8 libtcmalloc_minimal.so 00:09:57.814 ----------------------------------------------------- 00:09:57.814 00:09:57.814 15:57:55 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:57.814 15:57:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:57.814 15:57:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:57.814 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:57.814 fio-3.35 00:09:57.814 Starting 1 thread 00:10:15.901 00:10:15.901 test: (groupid=0, jobs=1): err= 0: pid=64364: Wed Nov 20 15:58:11 2024 00:10:15.901 read: IOPS=21.4k, BW=83.7MiB/s (87.7MB/s)(169MiB/2023msec) 00:10:15.901 slat (nsec): min=3363, max=61197, avg=5102.10, stdev=2141.49 00:10:15.901 clat (usec): min=876, max=25868, avg=2834.26, stdev=1000.95 00:10:15.901 lat (usec): min=880, max=25873, avg=2839.36, stdev=1001.98 00:10:15.901 clat percentiles (usec): 00:10:15.901 | 1.00th=[ 1876], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:15.901 | 30.00th=[ 2442], 40.00th=[ 
2474], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:15.901 | 70.00th=[ 2638], 80.00th=[ 2868], 90.00th=[ 3818], 95.00th=[ 4752], 00:10:15.901 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 8717], 99.95th=[23200], 00:10:15.901 | 99.99th=[24773] 00:10:15.901 bw ( KiB/s): min=73616, max=94448, per=100.00%, avg=86602.00, stdev=9135.73, samples=4 00:10:15.901 iops : min=18404, max=23612, avg=21650.50, stdev=2283.93, samples=4 00:10:15.901 write: IOPS=21.3k, BW=83.0MiB/s (87.1MB/s)(168MiB/2023msec); 0 zone resets 00:10:15.901 slat (nsec): min=3515, max=59880, avg=5359.44, stdev=2204.99 00:10:15.901 clat (usec): min=884, max=46960, avg=3141.28, stdev=2721.59 00:10:15.901 lat (usec): min=897, max=46965, avg=3146.64, stdev=2721.98 00:10:15.901 clat percentiles (usec): 00:10:15.901 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:15.901 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:15.901 | 70.00th=[ 2671], 80.00th=[ 2868], 90.00th=[ 3916], 95.00th=[ 5342], 00:10:15.901 | 99.00th=[20055], 99.50th=[23462], 99.90th=[33817], 99.95th=[39584], 00:10:15.901 | 99.99th=[46924] 00:10:15.901 bw ( KiB/s): min=69992, max=94320, per=100.00%, avg=85824.00, stdev=11128.38, samples=4 00:10:15.901 iops : min=17498, max=23580, avg=21456.00, stdev=2782.10, samples=4 00:10:15.901 lat (usec) : 1000=0.01% 00:10:15.901 lat (msec) : 2=1.40%, 4=89.67%, 10=8.00%, 20=0.38%, 50=0.55% 00:10:15.901 cpu : usr=99.26%, sys=0.05%, ctx=3, majf=0, minf=607 00:10:15.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:15.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.901 issued rwts: total=43329,43011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.901 00:10:15.901 Run status group 0 (all jobs): 00:10:15.901 READ: bw=83.7MiB/s (87.7MB/s), 83.7MiB/s-83.7MiB/s (87.7MB/s-87.7MB/s), io=169MiB (177MB), run=2023-2023msec 00:10:15.901 WRITE: bw=83.0MiB/s (87.1MB/s), 83.0MiB/s-83.0MiB/s (87.1MB/s-87.1MB/s), io=168MiB (176MB), run=2023-2023msec 00:10:15.901 ----------------------------------------------------- 00:10:15.901 Suppressions used: 00:10:15.901 count bytes template 00:10:15.901 1 32 /usr/src/fio/parse.c 00:10:15.901 1 8 libtcmalloc_minimal.so 00:10:15.901 ----------------------------------------------------- 00:10:15.901 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:15.901 15:58:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:15.901 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:15.902 15:58:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:15.902 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:15.902 fio-3.35 00:10:15.902 Starting 1 thread 00:10:24.006 00:10:24.006 test: (groupid=0, jobs=1): err= 0: pid=64425: Wed Nov 20 15:58:21 2024 00:10:24.006 read: IOPS=20.9k, BW=81.8MiB/s (85.7MB/s)(165MiB/2018msec) 00:10:24.006 slat (nsec): min=3352, max=68991, avg=5220.36, stdev=2300.53 00:10:24.006 clat (usec): min=576, max=22434, avg=2901.89, stdev=1029.62 00:10:24.006 lat (usec): min=580, max=22437, avg=2907.11, stdev=1030.74 00:10:24.006 clat percentiles (usec): 00:10:24.006 | 1.00th=[ 1729], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:10:24.006 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:10:24.006 | 70.00th=[ 2737], 80.00th=[ 3064], 90.00th=[ 4015], 95.00th=[ 5014], 00:10:24.006 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8717], 99.95th=[19268], 00:10:24.006 | 99.99th=[21365] 00:10:24.006 bw ( KiB/s): min=70128, max=93320, per=100.00%, avg=84398.00, stdev=10036.68, samples=4 00:10:24.006 iops : min=17532, max=23330, avg=21099.50, stdev=2509.17, samples=4 00:10:24.006 write: IOPS=20.8k, BW=81.2MiB/s (85.2MB/s)(164MiB/2018msec); 0 zone resets 00:10:24.006 slat (nsec): min=3452, max=71024, avg=5483.22, stdev=2299.94 00:10:24.006 clat (usec): min=620, max=38416, avg=3211.67, stdev=2845.21 00:10:24.006 lat (usec): min=623, max=38420, avg=3217.15, stdev=2845.64 00:10:24.006 clat percentiles (usec): 00:10:24.006 | 1.00th=[ 1762], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:10:24.006 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2606], 00:10:24.006 | 70.00th=[ 2737], 80.00th=[ 3097], 90.00th=[ 4080], 95.00th=[ 5407], 
00:10:24.006 | 99.00th=[22676], 99.50th=[25297], 99.90th=[29492], 99.95th=[31065], 00:10:24.006 | 99.99th=[34866] 00:10:24.006 bw ( KiB/s): min=67000, max=94232, per=100.00%, avg=83764.00, stdev=11770.78, samples=4 00:10:24.006 iops : min=16750, max=23558, avg=20941.00, stdev=2942.69, samples=4 00:10:24.006 lat (usec) : 750=0.01%, 1000=0.05% 00:10:24.006 lat (msec) : 2=1.84%, 4=87.86%, 10=9.35%, 20=0.14%, 50=0.75% 00:10:24.006 cpu : usr=99.26%, sys=0.05%, ctx=4, majf=0, minf=606 00:10:24.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:24.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.006 issued rwts: total=42234,41974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.006 00:10:24.006 Run status group 0 (all jobs): 00:10:24.006 READ: bw=81.8MiB/s (85.7MB/s), 81.8MiB/s-81.8MiB/s (85.7MB/s-85.7MB/s), io=165MiB (173MB), run=2018-2018msec 00:10:24.006 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=164MiB (172MB), run=2018-2018msec 00:10:24.006 ----------------------------------------------------- 00:10:24.006 Suppressions used: 00:10:24.006 count bytes template 00:10:24.006 1 32 /usr/src/fio/parse.c 00:10:24.006 1 8 libtcmalloc_minimal.so 00:10:24.006 ----------------------------------------------------- 00:10:24.006 00:10:24.006 ************************************ 00:10:24.006 END TEST nvme_fio 00:10:24.006 ************************************ 00:10:24.006 15:58:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:24.006 15:58:21 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:24.006 00:10:24.006 real 0m48.341s 00:10:24.006 user 0m26.658s 00:10:24.006 sys 0m41.245s 00:10:24.006 15:58:21 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.006 15:58:21 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 ************************************ 00:10:24.006 END TEST nvme 00:10:24.006 ************************************ 00:10:24.006 00:10:24.006 real 1m57.799s 00:10:24.006 user 3m48.557s 00:10:24.006 sys 0m51.686s 00:10:24.006 15:58:21 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.006 15:58:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 15:58:21 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:24.006 15:58:21 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:24.006 15:58:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.006 15:58:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.006 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:10:24.006 ************************************ 00:10:24.006 START TEST nvme_scc 00:10:24.006 ************************************ 00:10:24.006 15:58:21 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:24.006 * Looking for test storage... 
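Before the nvme_scc trace continues, a note on the fio passes that just completed above: each one is driven by the fio_plugin helper in autotest_common.sh, which first asks ldd whether the SPDK ioengine was linked against a sanitizer runtime and, if so, preloads that runtime ahead of the plugin so the uninstrumented system fio can load it. A minimal sketch of that pattern, using the paths visible in this run (job file, plugin path, and fio location are taken from the trace, not verified independently):

#!/usr/bin/env bash
# Sketch of the fio_plugin flow traced above (autotest_common.sh@1341-1356).
fio_dir=/usr/src/fio
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
sanitizers=('libasan' 'libclang_rt.asan')

asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # If the plugin links a sanitizer runtime, fio must preload it first.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# Preload the sanitizer runtime (if any) together with the SPDK ioengine;
# colons in the PCIe address are written as dots so fio does not split on them.
LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096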
00:10:24.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:24.006 15:58:21 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.006 15:58:21 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.006 15:58:21 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.006 15:58:21 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:24.006 15:58:21 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:24.007 15:58:21 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.007 15:58:21 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.007 --rc genhtml_branch_coverage=1 00:10:24.007 --rc genhtml_function_coverage=1 00:10:24.007 --rc genhtml_legend=1 00:10:24.007 --rc geninfo_all_blocks=1 00:10:24.007 --rc geninfo_unexecuted_blocks=1 00:10:24.007 00:10:24.007 ' 00:10:24.007 15:58:21 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.007 --rc genhtml_branch_coverage=1 00:10:24.007 --rc genhtml_function_coverage=1 00:10:24.007 --rc genhtml_legend=1 00:10:24.007 --rc geninfo_all_blocks=1 00:10:24.007 --rc geninfo_unexecuted_blocks=1 00:10:24.007 00:10:24.007 ' 00:10:24.007 15:58:21 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:24.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.007 --rc genhtml_branch_coverage=1 00:10:24.007 --rc genhtml_function_coverage=1 00:10:24.007 --rc genhtml_legend=1 00:10:24.007 --rc geninfo_all_blocks=1 00:10:24.007 --rc geninfo_unexecuted_blocks=1 00:10:24.007 00:10:24.007 ' 00:10:24.007 15:58:21 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.007 --rc genhtml_branch_coverage=1 00:10:24.007 --rc genhtml_function_coverage=1 00:10:24.007 --rc genhtml_legend=1 00:10:24.007 --rc geninfo_all_blocks=1 00:10:24.007 --rc geninfo_unexecuted_blocks=1 00:10:24.007 00:10:24.007 ' 00:10:24.007 15:58:21 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.007 15:58:21 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.007 15:58:21 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.007 15:58:21 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.007 15:58:21 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.007 15:58:21 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:24.007 15:58:21 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
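The lt/cmp_versions trace above (scripts/common.sh) is the stock bash version comparison: both version strings are split on '.', '-' and ':' and their components are compared numerically, with missing components treated as 0. A condensed sketch of that logic, reconstructed from the xtrace lines rather than the source file, so treat it as an approximation:

# Approximate sketch of cmp_versions as traced above (scripts/common.sh).
cmp_versions() {
    local ver1 ver1_l ver2 ver2_l op=$2 lt=0 gt=0 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # Missing components compare as 0, e.g. 1.15 vs 2 -> 1.15 vs 2.0
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { gt=1; break; }
        (( d1 < d2 )) && { lt=1; break; }
    done
    case "$op" in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }   # used above as: lt 1.15 2

Here lt 1.15 2 succeeds on the first component (1 < 2), which is why the trace goes on to set the lcov branch/function coverage flags.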
00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:24.007 15:58:21 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:24.007 15:58:21 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.007 15:58:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:24.007 15:58:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:24.007 15:58:21 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:24.007 15:58:21 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.007 Waiting for block devices as requested 00:10:24.007 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.007 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.007 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.007 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.326 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:29.326 15:58:27 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:29.326 15:58:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.326 15:58:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:29.326 15:58:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.326 15:58:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
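What follows for the next several hundred trace lines is functions.sh's nvme_get: it runs nvme-cli's id-ctrl against each controller and folds every "field : value" line of the output into a global associative array (nvme0, nvme1, ...) that later tests query instead of re-invoking nvme-cli. A boiled-down sketch of that parse loop (the field-name cleanup and quoting here are simplified relative to the real helper):

# Simplified sketch of nvme_get as traced above (functions.sh@17-23).
nvme_get() {
    local ref=$1 reg val                # ref: array name, e.g. nvme0
    declare -gA "$ref=()"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # 'vid      ' -> 'vid'
        [[ -n $reg && -n $val ]] || continue
        # eval writes through the name reference; the quotes keep padded
        # values such as 'QEMU NVMe Ctrl ' intact, as seen in the dump.
        eval "${ref}[$reg]=\"\${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
}

nvme_get nvme0 id-ctrl /dev/nvme0
echo "${nvme0[sn]}/ ${nvme0[mn]} mdts=${nvme0[mdts]}"   # 12341 / QEMU NVMe Ctrl  mdts=7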
00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:29.326 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:29.327 15:58:27 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:29.327 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.328 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.328 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:29.329 
15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:29.329 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.329 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:29.330 15:58:27 nvme_scc 
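
[annotation] A sketch of the namespace walk driving the ng0n1/nvme0n1 blocks above, assuming extglob and a sysfs layout like /sys/class/nvme/nvme0/{ng0n1,nvme0n1}. The pattern matches both the generic char node (ng0n1) and the block node (nvme0n1); in the real script _ctrl_ns is a nameref onto nvme0_ns, so the second match (nvme0n1) overwrites the first for the same namespace index:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}                # ng0n1 on the first pass, then nvme0n1
      _ctrl_ns[${ns##*n}]=$ns_dev     # key "1": the digits after the last 'n'
    done

With ctrl=/sys/class/nvme/nvme0, ${ctrl##*nvme} expands to "0" and ${ctrl##*/} to "nvme0", so the glob is effectively @(ng0|nvme0n)* under the controller's sysfs directory.
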
-- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:29.330 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:29.330 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:29.331 15:58:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.331 15:58:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:29.331 15:58:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.331 15:58:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:29.331 15:58:27 
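
[annotation] Hedged reconstruction of the bookkeeping that closes out controller nvme0 above and of the pci_can_use gate that admits nvme1. PCI_ALLOWED/PCI_BLOCKED are empty in this run (hence the regex test against an empty allow list in the trace), so every controller passes; the list handling here is simplified to glob matching rather than the script's exact regex:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    pci_can_use() {
      local pci=$1
      [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $pci "* ]] && return 1
      [[ " $PCI_BLOCKED " == *" $pci "* ]] && return 1
      return 0
    }

    for ctrl in /sys/class/nvme/nvme*; do
      ctrl_dev=${ctrl##*/}                               # nvme0, nvme1, ...
      pci=$(basename "$(readlink -f "$ctrl/device")")    # BDF, e.g. 0000:00:11.0
      pci_can_use "$pci" || continue
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns                  # name of that ctrl's ns array
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # dense index by ctrl number
    done

Keeping the namespace map's *name* (nvme0_ns) in nvmes rather than its contents is what lets the script reattach to it later with local -n, as seen at functions.sh@53 above.
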
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 
15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.331 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:29.332 
15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:29.332 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.333 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.333 15:58:27 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.333 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
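For the namespace just read, nsze, ncap, and nuse are hexadecimal counts of logical blocks, and the block size comes from the LBA format that flbas selects (0x7 here, i.e. lbaf7, which the entries a little further on show as "ms:64 lbads:12 rp:0 (in use)", so 2^12 = 4096-byte data blocks with 64 bytes of separate metadata). A back-of-the-envelope bash check of what those values mean, using only numbers from this trace:

    # Rough size of ng1n1 from the id-ns fields traced above.
    nsze=0x17a17a   # namespace size in logical blocks
    lbads=12        # from the in-use format: lbaf7 "ms:64 lbads:12"
    echo $(( nsze ))                  # 1548666 blocks
    echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, ~5.9 GiB
    # With flbas bit 4 clear (0x7), the 64 metadata bytes per block live
    # out of band, so they are not counted in the data-byte figure above.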
00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:29.334 15:58:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.334 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 
15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
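The reason the script inventories all of this is presumably the test's subject: nvme_scc exercises the NVMe Simple Copy command, and the fields captured above are the ones that advertise it. Per the NVMe base specification, ONCS bit 8 indicates Copy support and OCFS bit 0 indicates copy descriptor format 0h, while mssrl, mcl, and msrc bound a copy's range lengths and count; nvme1 reported oncs=0x15d, ocfs=0x3, mssrl=128, mcl=128, msrc=127 earlier in this trace. A hedged sketch of such a capability gate (the function name is invented, not one functions.sh defines):

    # Hypothetical Simple Copy gate built from the traced id-ctrl values.
    supports_simple_copy_sketch() {
      local oncs=$1 ocfs=$2
      (( (oncs >> 8) & 1 )) || return 1   # ONCS bit 8: Copy command supported
      (( ocfs & 1 ))        || return 1   # OCFS bit 0: copy format 0h supported
    }
    # nvme1: 0x15d = 0b1_0101_1101 has bit 8 set; 0x3 has bit 0 set.
    supports_simple_copy_sketch 0x15d 0x3 && echo "nvme1 should accept Simple Copy"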
00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:29.335 
15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.335 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:29.336 15:58:27 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:29.336 15:58:27 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:29.336 15:58:27 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:29.336 15:58:27 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.336 15:58:27 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
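Once a controller's arrays are filled, the @58-@63 steps traced just before nvme2 started show the bookkeeping: each namespace array name is dropped into the controller's _ctrl_ns map, and the controller itself lands in the global ctrls, nvmes, bdfs, and ordered_ctrls tables (nvme1 was registered against PCI address 0000:00:10.0). A sketch of how a later caller could walk that registry; the array names are the ones logged, but the loop body is our illustration:

    # Illustrative consumer of the ctrls/bdfs registries populated above.
    for ctrl_dev in "${!ctrls[@]}"; do           # nvme1, nvme2, ...
      declare -n ctrl_ref=${ctrls[$ctrl_dev]}    # nameref to e.g. the nvme1 array
      printf '%s @ %s: sn=%s oncs=%s\n' \
        "$ctrl_dev" "${bdfs[$ctrl_dev]}" "${ctrl_ref[sn]}" "${ctrl_ref[oncs]}"
    done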
00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:29.336 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:29.337 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
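Two of the nvme2 fields just captured, wctemp=343 and cctemp=373, are temperature thresholds that NVMe reports in kelvin; subtracting 273 gives the familiar figures of roughly 70 C (warning composite temperature) and 100 C (critical):

    # Kelvin -> Celsius for the thresholds traced above.
    wctemp=343 cctemp=373
    echo "warning $(( wctemp - 273 ))C, critical $(( cctemp - 273 ))C"   # 70C, 100C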
00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:29.337 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:29.337 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:29.338 
15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.338 
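For orientation: the block above is the generic nvme_get helper turning nvme-cli's "field : value" lines into bash associative-array entries. A minimal stand-alone sketch of that pattern follows; it is not the functions.sh implementation itself, and the whitespace trimming is an assumption about nvme-cli's column-padded output.

    #!/usr/bin/env bash
    # Sketch: read "field : value" pairs from `nvme id-ctrl` into an
    # associative array, mirroring the IFS=: / read / eval sequence traced above.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # drop the column padding around the field name
        val=${val#"${val%%[![:space:]]*}"}  # trim leading whitespace from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme2)
    echo "oacs=${ctrl[oacs]} nn=${ctrl[nn]} subnqn=${ctrl[subnqn]}"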
00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] -> ns_dev=ng2n1; nvme_get ng2n1 id-ns /dev/ng2n1
00:10:29.338 15:58:27 nvme_scc -- nvme/functions.sh@16-23 -- nvme_get: '/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1' read into ng2n1[] (field-by-field trace collapsed):
00:10:29.338   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:10:29.338   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:10:29.339   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:10:29.339   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:29.339   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:29.339   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:29.339 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:10:29.339 15:58:27 nvme_scc -- nvme/functions.sh@54-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] -> ns_dev=ng2n2; nvme_get ng2n2 id-ns /dev/ng2n2
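The @54 loop is the namespace-enumeration idiom worth noting here: $ctrl is the sysfs controller path, the extglob alternation matches both ngXnN character-device and nvmeXnN block-device nodes, and @58 keys the nameref'd per-controller array by namespace ID. A stand-alone sketch under those assumptions (the echo line is added for illustration):

    #!/usr/bin/env bash
    # Sketch of the functions.sh@54/@58 enumeration: ${ctrl##*nvme} -> "2",
    # ${ctrl##*/} -> "nvme2", so the pattern matches ng2n* and nvme2n* entries.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ctrl_ns[${ns##*n}]=${ns##*/}   # ${ns##*n} strips through the last 'n', leaving the NSID
        echo "nsid ${ns##*n} -> ${ns##*/}"
    done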
00:10:29.340 15:58:27 nvme_scc -- nvme/functions.sh@16-23 -- nvme_get: 'nvme id-ns /dev/ng2n2' read into ng2n2[]; every field matches ng2n1 above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ' (trace collapsed)
00:10:29.340 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@54-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] -> ns_dev=ng2n3; nvme_get ng2n3 id-ns /dev/ng2n3
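A quick decode of the recurring lbaf descriptors: ms is the metadata bytes per LBA, lbads is log2 of the LBA data size, rp is relative performance, and flbas=0x4 selects LBA format 4. Confirming the in-use block size from the traced values:

    # lbaf4 = 'ms:0 lbads:12 rp:0 (in use)' per the trace; flbas & 0xf picks format 4
    lbads=12
    echo "in-use LBA data size: $((1 << lbads)) bytes, metadata: 0 bytes"   # 4096 bytes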
00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@16-23 -- nvme_get: 'nvme id-ns /dev/ng2n3' read into ng2n3[]; fields so far match ng2n1/ng2n2:
00:10:29.341   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:10:29.341   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:10:29.341   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:10:29.341   anagrpid=0 nsattr=0 nvmsetid=0
00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@21 -- #
IFS=: 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.341 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.342 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:29.342 15:58:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.342 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:29.343 15:58:27 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.343 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:29.344 
15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:29.344 15:58:27 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=:
00:10:29.344 15:58:27 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:29.344 [xtrace condensed: nvme_get fills the nvme2n3 associative array register by register - nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
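For readers decoding the condensed trace above: nvme_get's pattern is an IFS=: read loop that turns each "register : value" line emitted by nvme-cli into an entry of a bash associative array named after the device. A minimal sketch of that pattern, with a hypothetical helper name and simplified key handling (this is not the verbatim nvme/functions.sh source):

  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      declare -gA "$ref"                       # global assoc array, e.g. nvme2n3
      while IFS=: read -r reg val; do          # split each output line at the first ':'
          reg=${reg%% *}                       # strip the padding after the field name
          [[ -n $reg && -n $val ]] || continue # skip blank or valueless lines
          eval "${ref}[\$reg]=\"\${val# }\""   # nvme2n3[nmic]="0", nvme2n3[mssrl]="128", ...
      done < <(nvme id-ns "$dev")              # tool path assumed to be on PATH
  }
  nvme_get_sketch nvme2n3 /dev/nvme2n3 && echo "${nvme2n3[mssrl]}"   # would print 128 here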
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:10:29.345 15:58:27 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:10:29.345 15:58:27 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:29.345 [xtrace condensed: per-register parse of the nvme3 controller - vid=0x1b36 ssvid=0x1af4 sn='12343' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 oaes=0x100 ctratt=0x88010 cntrltype=1 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373 endgidmax=1 sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'; all remaining fields 0]
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
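The read side of this bookkeeping avoids eval: as the trace below shows, helpers such as get_nvme_ctrl_feature bind a bash nameref to the controller's array and index it directly. A toy version of that lookup, assuming the array already exists (the function name here is illustrative):

  get_reg_sketch() {
      local ctrl=$1 reg=$2
      local -n _ctrl=$ctrl                 # nameref: _ctrl now aliases the array named in $ctrl
      [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
  }
  declare -A nvme3=( [oncs]=0x15d [mdts]=7 )
  get_reg_sketch nvme3 oncs                # prints 0x15d, matching the trace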
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:29.606 15:58:27 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:29.606 15:58:27 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:10:29.606 [xtrace condensed: the same ctrl_has_scc walk repeats for nvme0, nvme3 and nvme2; each reads oncs=0x15d, so bit 8 is set and all four controllers qualify]
00:10:29.607 15:58:27 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:29.607 15:58:27 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:29.607 15:58:27 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:29.607 15:58:27 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:29.607 15:58:27 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:29.607 15:58:27 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:29.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:30.429 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.429 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.429 15:58:28 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:30.429 15:58:28 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:30.429 15:58:28 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.429 15:58:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:30.429 ************************************
00:10:30.429 START TEST nvme_simple_copy
00:10:30.429 ************************************
00:10:30.429 15:58:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:30.685 Initializing NVMe Controllers
00:10:30.685 Attaching to 0000:00:10.0
00:10:30.685 Controller supports SCC. Attached to 0000:00:10.0
00:10:30.685 Namespace ID: 1 size: 6GB
00:10:30.685 Initialization complete.
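The ctrl_has_scc gate in the trace above is a single bit test: per the NVMe base specification, bit 8 of ONCS (Optional NVM Command Support) advertises the Copy command, which is exactly what this simple-copy test exercises. With the value these QEMU controllers report:

  oncs=0x15d                                      # 0b1 0101 1101: bits 0,2,3,4,6,8 set
  (( oncs & 1 << 8 )) && echo 'Copy (simple copy) supported'

Every controller passes, so nvme1 at 0000:00:10.0 is simply the first one echoed back to the caller.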
00:10:30.685
00:10:30.685 Controller QEMU NVMe Ctrl (12340 )
00:10:30.685 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:30.685 Namespace Block Size:4096
00:10:30.685 Writing LBAs 0 to 63 with Random Data
00:10:30.685 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:30.685 LBAs matching Written Data: 64
00:10:30.685
00:10:30.685 real 0m0.262s
00:10:30.685 user 0m0.101s
00:10:30.685 sys 0m0.058s
00:10:30.685 15:58:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:30.685 15:58:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:30.685 ************************************
00:10:30.685 END TEST nvme_simple_copy
00:10:30.685 ************************************
00:10:30.685
00:10:30.685 real 0m7.497s
00:10:30.685 user 0m0.999s
00:10:30.685 sys 0m1.229s
00:10:30.685 15:58:28 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:30.685 15:58:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:30.685 ************************************
00:10:30.685 END TEST nvme_scc
00:10:30.685 ************************************
00:10:30.685 15:58:28 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:30.685 15:58:28 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:30.685 15:58:28 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:30.685 15:58:28 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:30.685 15:58:28 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:30.685 15:58:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:30.685 15:58:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.685 15:58:28 -- common/autotest_common.sh@10 -- # set +x
00:10:30.686 ************************************
00:10:30.686 START TEST nvme_fdp
00:10:30.686 ************************************
00:10:30.686 15:58:28 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:10:30.945 * Looking for test storage...
00:10:30.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:30.945 [xtrace condensed: lcov version probe - cmp_versions walks '1.15' and '2' field by field via scripts/common.sh, finds 1 < 2 and returns 0, so LCOV_OPTS and LCOV are exported with the legacy --rc lcov_branch_coverage/lcov_function_coverage and genhtml/geninfo flags]
00:10:30.946 15:58:29 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:30.946 15:58:29 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:10:30.946 15:58:29 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:30.946 [xtrace condensed: paths/export.sh prepends the protoc/go/golangci toolchain directories to PATH several times and re-exports the result]
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:10:30.946 15:58:29 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:10:30.946 15:58:29 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:30.946 15:58:29 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:31.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:31.207 Waiting for block devices as requested
00:10:31.465 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:31.465 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:31.465 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:31.465 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:36.729 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:36.729 15:58:34 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
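The lcov probe condensed above rests on cmp_versions, a field-by-field numeric compare of dotted version strings. A minimal sketch of the idea on the same inputs (simplified; not the verbatim scripts/common.sh implementation):

  lt_sketch() {                          # true if version $1 is older than $2
      local -a a b
      IFS=. read -ra a <<<"$1"           # split both versions on dots
      IFS=. read -ra b <<<"$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                           # equal versions are not "less than"
  }
  lt_sketch 1.15 2 && echo 'lcov 1.15 predates 2: keep the legacy --rc flags'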
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:36.730 15:58:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:36.730 15:58:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:36.730 15:58:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.730 15:58:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:36.730 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.730 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.731 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:36.731 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:36.732 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 
15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:36.732 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.732 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:36.733 15:58:34 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:36.733 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.733 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
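Every eval assignment in this dump (the nvme0, ng0n1, and later nvme0n1 arrays) is produced by the nvme_get helper in test/common/nvme/functions.sh, whose @16-@23 frames appear throughout the trace: it runs nvme-cli, splits each "reg : val" output line on the first colon, and evals the pair into a globally scoped associative array named after the device. A minimal sketch of that pattern follows; it is simplified from the trace, not the verbatim SPDK helper, and the NVME_CMD variable name is illustrative (the real helper builds the command line internally):

    NVME_CMD=/usr/local/src/nvme-cli/nvme    # binary used by this run

    # Parse "reg : val" lines into the associative array named by $1,
    # producing the nvme0[...]=... style assignments seen in this trace.
    # Trusts nvme-cli output; quoting is as fragile as the original eval.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. nvme0=() in global scope
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "ps    0 " -> "ps0"
            val=${val# }                     # drop the separator's space
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val}\"" # nvme0[vid]="0x1b36"
        done < <("$NVME_CMD" "$@")
    }

    # Usage, as in this trace: nvme_get nvme0 id-ctrl /dev/nvme0
    # afterwards ${nvme0[vid]} == 0x1b36 and ${nvme0[mdts]} == 7.

Note that only one leading space is stripped from values, which is why string registers such as sn and mn keep their trailing padding ('12341 ', 'QEMU NVMe Ctrl ') in the arrays.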
00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:36.734 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
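At the @54-@58 frames that follow, the scan records ng0n1 in the controller's namespace map and then repeats the whole id-ns dump for the block-device view of the same namespace, nvme0n1. Reconstructed from those frames, the outer walk looks roughly like the sketch below; it reuses the nvme_get sketch above, the explicit declare line is sketch-only bookkeeping, and the @(...) pattern relies on the shopt -s extglob performed earlier in scripts/common.sh:

    shopt -s extglob

    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            ctrl_dev=${ctrl##*/}                # nvme0
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"     # backing store for the nameref
            local -n _ctrl_ns=${ctrl_dev}_ns
            # Each namespace is visited twice: as the generic character
            # device (ng0n1) and as the block device (nvme0n1).
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev     # _ctrl_ns[1]=ng0n1, then nvme0n1
            done
        done
    }

The double pass is why the ng0n1 dump above and the nvme0n1 dump that follows are nearly identical: both describe namespace 1 of the QEMU controller at 0000:00:11.0, first through the generic char device and then through the block device.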
00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:36.734 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:36.735 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.735 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.735 15:58:34 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:36.736 15:58:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:36.736 15:58:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:36.736 15:58:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.736 15:58:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:36.736 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.736 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
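The functions.sh@16-@23 steps repeating throughout this trace are one parsing loop: nvme_get runs the bundled nvme-cli binary (id-ctrl or id-ns), splits every "reg : val" line of its output on the first colon, and evals each pair into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the numbered steps visible above (the register-name trimming and the exact quoting are assumptions, since only the traced statements appear here):

nvme_get() {                                    # nvme_get <array-name> <nvme-cli args...>
    local ref=$1 reg val                        # functions.sh@17
    shift                                       # functions.sh@18
    local -gA "$ref=()"                         # functions.sh@20: e.g. nvme1=()
    while IFS=: read -r reg val; do             # functions.sh@21: split on the first ':'
        reg=${reg//[[:space:]]/}                # assumption: strip column padding
        [[ -n $val ]] || continue               # functions.sh@22: keep only reg/val lines
        eval "${ref}[${reg}]=\"${val# }\""      # functions.sh@23: nvme1[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme "$@") # functions.sh@16
}

After nvme_get nvme1 id-ctrl /dev/nvme1, fields read back as e.g. ${nvme1[sn]} and ${nvme1[subnqn]}, which is what the functions.sh@60-@63 bookkeeping later in the trace relies on.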
00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.737 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.738 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:36.739 15:58:34 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.739 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:36.740 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
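Around those per-register reads, the functions.sh@47-@63 steps form the outer enumeration: walk /sys/class/nvme/nvme*, skip controllers that pci_can_use (scripts/common.sh) rejects, id-ctrl the controller, then id-ns every matching namespace node, both the char device (ng1n1) and the block device (nvme1n1), via the extglob pattern traced at @54. A condensed sketch under those trace lines (the wrapper's name, the sysfs PCI lookup, and the _ns array declaration are assumptions not shown in the trace; nvme_get is the parser sketched earlier):

shopt -s extglob nullglob                    # the @(...) pattern at @54 needs extglob

declare -A ctrls nvmes bdfs                  # maps filled at functions.sh@60-@62
declare -a ordered_ctrls                     # functions.sh@63

scan_nvme_ctrls() {                          # hypothetical name; not shown in the trace
    local ctrl pci ctrl_dev ns ns_dev
    for ctrl in /sys/class/nvme/nvme*; do    # functions.sh@47
        [[ -e $ctrl ]] || continue           # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF via sysfs
        pci_can_use "$pci" || continue       # functions.sh@50: PCI allow/block lists
        ctrl_dev=${ctrl##*/}                 # functions.sh@51: e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # functions.sh@52
        declare -gA "${ctrl_dev}_ns=()"      # assumption: created before the nameref
        local -n _ctrl_ns=${ctrl_dev}_ns     # functions.sh@53
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # functions.sh@54
            [[ -e $ns ]] || continue         # functions.sh@55
            ns_dev=${ns##*/}                 # functions.sh@56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev      # functions.sh@58: index by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev         # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns    # functions.sh@61
        bdfs["$ctrl_dev"]=$pci               # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # functions.sh@63
        unset -n _ctrl_ns                    # allow re-pointing on the next controller
    done
}

This is why the trace visits ng1n1 and then nvme1n1 back to back: both sysfs entries under controller nvme1 match the @54 glob, and each gets its own id-ns pass into its own array.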
00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.740 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.741 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.741 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:36.741 15:58:34 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:36.741 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:36.742 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
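[editor's note] Zooming out, the ordering visible above (id-ns for ng1n1, then for nvme1n1, then the _ctrl_ns/ctrls bookkeeping) comes from a nested loop over controllers and their namespace nodes. A rough sketch of its shape, reconstructed from the glob and helper names that appear in the trace, not a verbatim copy of the script:

  # Reconstructed shape of the enumeration loop driving this trace (sketch).
  shopt -s extglob nullglob
  for ctrl in /sys/class/nvme/nvme*; do
      ctrl_dev=${ctrl##*/}                              # e.g. nvme1
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme1[...]
      # Both the generic char node (ng1n1) and the block node (nvme1n1) are probed:
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          ns_dev=${ns##*/}                              # e.g. ng1n1 or nvme1n1
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # fills ng1n1[...] / nvme1n1[...]
      done
  done
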
00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:36.742 15:58:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:36.742 15:58:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:36.742 15:58:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:36.742 15:58:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.742 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
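[editor's note] A note on the flbas/lbaf pairs captured for nvme1n1 above: the low nibble of flbas (0x7) selects lbaf7, whose descriptor was stored as 'ms:64 lbads:12 rp:0 (in use)', with lbads giving log2 of the data block size. A small decode sketch against exactly those values:

  # Decode the in-use LBA format for nvme1n1 from the array filled above (sketch).
  idx=$(( ${nvme1n1[flbas]} & 0xf ))        # low nibble of flbas -> 7 here
  desc=${nvme1n1[lbaf$idx]}                 # 'ms:64 lbads:12 rp:0 (in use)'
  lbads=${desc##*lbads:}; lbads=${lbads%% *}
  ms=${desc#ms:};         ms=${ms%% *}
  echo "nvme1n1: $(( 1 << lbads ))-byte data blocks + ${ms}-byte metadata"  # 4096 + 64
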
00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:36.743 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.743 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
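[editor's note] The wctemp/cctemp values just captured (343 and 373) are kelvin, per the Identify Controller layout; a tiny readability sketch using the parsed array:

  # id-ctrl temperature thresholds are reported in kelvin (sketch).
  echo "nvme2 warning  threshold: $(( ${nvme2[wctemp]} - 273 )) C"   # 343 K -> 70 C
  echo "nvme2 critical threshold: $(( ${nvme2[cctemp]} - 273 )) C"   # 373 K -> 100 C
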
00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:36.744 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:36.744 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
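[editor's note] Because everything lands in ordinary associative arrays, later capability checks in the test reduce to bit tests. A hypothetical consumer of the oncs value parsed above (0x15d), not code from this run:

  # ONCS is a capability bitmask; 0x15d has bits 0,2,3,4,6,8 set.
  if (( ${nvme2[oncs]} & (1 << 3) )); then   # bit 3 = Write Zeroes
      echo "nvme2 supports Write Zeroes"
  fi
  if (( ${nvme2[oncs]} & (1 << 2) )); then   # bit 2 = Dataset Management
      echo "nvme2 supports DSM"
  fi
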
00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:36.745 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 
15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.746 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.747 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.010 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:37.011 15:58:34 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 
15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.011 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.012 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:37.013 
15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
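[Editor's sketch] Each of these per-namespace dumps is driven by the loop at functions.sh lines 53-58 shown in the trace: a nameref to a per-controller array (nvme2_ns, @53), an extglob over the controller's sysfs directory that matches both the generic character devices (ng2n1..ng2n3) and the block devices (nvme2n1...), and an assignment that files the discovered device under its namespace number (@58). A sketch of that discovery loop, assuming bash with extglob and using /sys/class/nvme/nvme2 from this run as the example controller:

#!/usr/bin/env bash
shopt -s extglob nullglob                # the @(...) pattern at @54 needs extglob

ctrl=/sys/class/nvme/nvme2               # example controller from this run
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns             # nameref, as at functions.sh@53

for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2* | nvme2n* (@54)
    [[ -e $ns ]] || continue             # existence check, as at @55
    ns_dev=${ns##*/}                     # e.g. ng2n1 or nvme2n1 (@56)
    # functions.sh@57 then runs: nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns##*n}]=$ns_dev          # key is the namespace number (@58)
done

printf '%s\n' "${!_ctrl_ns[@]}"          # keys in this run: 1 2 3

Because ngXnY and nvmeXnY share the same trailing number, whichever entry the sorted glob yields last for a given namespace appears to win the _ctrl_ns slot; in this run the ng2n* entries are registered first and the nvme2n* block devices follow, as the continuing trace below shows for nvme2n1.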
00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:37.013 15:58:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:37.013 15:58:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.013 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.014 15:58:35 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:37.014 15:58:35 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.014 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.015 
15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:37.015 15:58:35 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.015 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.016 
15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
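The long run of eval/assignment pairs above is nvme/functions.sh's nvme_get loop at work: it pipes `/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1` through `IFS=: read -r reg val` (functions.sh@21) and evals each non-empty value into a namespace-named associative array (nvme2n1[dpc]=0x1f, nvme2n1[mssrl]=128, and so on), so later test stages can look fields up by register name. Below is a minimal sketch of that pattern, assuming nvme-cli's "reg : value" output format; the function and array names here are illustrative, and the real script builds the array name dynamically via eval rather than hard-coding it.

#!/usr/bin/env bash
# Sketch of the id-ns parsing pattern seen in the trace above.
# Assumes lines of the form "reg      : value" from nvme-cli;
# ns_info/parse_id_ns are illustrative names, not the script's own.
declare -A ns_info

parse_id_ns() {
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # "lbaf  4 " -> "lbaf4"
        val=${val#"${val%%[![:space:]]*}"}        # strip leading blanks
        [[ -n $reg && -n $val ]] || continue      # skip valueless lines
        ns_info[$reg]=$val
    done < <(nvme id-ns "$dev")
}

parse_id_ns /dev/nvme2n1
printf 'flbas=%s nlbaf=%s\n' "${ns_info[flbas]}" "${ns_info[nlbaf]}"

One detail worth noting: with two variables, read assigns everything after the first colon to val, so composite values such as "ms:0 lbads:9 rp:0 " survive intact (trailing space included), which is exactly why the lbaf0-lbaf7 entries in the trace carry their full ms/lbads/rp sub-fields.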
00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:37.016 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:37.017 15:58:35 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:37.017 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:37.018 15:58:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.018 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:37.019 15:58:35 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:37.019 15:58:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:37.019 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.020 15:58:35 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:37.020 15:58:35 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:37.020 15:58:35 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:37.020 15:58:35 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.020 15:58:35 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:37.020 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
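At this point the same nvme_get machinery is parsing `nvme id-ctrl /dev/nvme3`: the controller identifies as QEMU's emulated NVMe device (vid 0x1b36, mn "QEMU NVMe Ctrl", fr 8.0.0, mdts 7), and a few entries below it reports ctratt=0x88010. That field is the interesting one for an nvme_fdp run, since NVMe 2.0 assigns CTRATT bit 19 to Flexible Data Placement, so ctratt is what an FDP test would consult for support. A small sketch of such a capability check, assuming the ctratt string has already been captured by a loop like the one above; the helper name is illustrative, not from functions.sh.

#!/usr/bin/env bash
# Sketch: test the FDP attribute (CTRATT bit 19 per NVMe 2.0) on a
# value parsed from id-ctrl output, e.g. nvme3[ctratt]=0x88010 here.
supports_fdp() {
    local ctratt=$1
    # Bash arithmetic accepts the 0x-prefixed hex string directly.
    (( (ctratt & (1 << 19)) != 0 ))
}

if supports_fdp 0x88010; then
    echo "controller advertises Flexible Data Placement"
fi

For the 0x88010 reported in this trace, bits 4, 15, and 19 are set, and the check above succeeds, consistent with this QEMU controller being the one the FDP test exercises.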
00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.021 15:58:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.021 15:58:35 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
15:58:35 nvme_fdp -- nvme/functions.sh@21-23 -- # (condensed) the loop repeats IFS=: / read -r reg val / eval 'nvme3[$reg]="$val"' for every remaining identify field; values recorded for nvme3:
15:58:35 nvme_fdp --     crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
15:58:35 nvme_fdp --     hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1
15:58:35 nvme_fdp --     anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
15:58:35 nvme_fdp --     icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
15:58:35 nvme_fdp --     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
15:58:35 nvme_fdp -- nvme/functions.sh@53-63 -- # local -n _ctrl_ns=nvme3_ns; ctrls[nvme3]=nvme3; nvmes[nvme3]=nvme3_ns; bdfs[nvme3]=0000:00:13.0; ordered_ctrls[3]=nvme3
15:58:35 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
15:58:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
15:58:35 nvme_fdp -- nvme/functions.sh@192-199 -- # (condensed) get_ctrls_with_feature fdp calls ctrl_has_fdp for each controller, which reads ctratt and tests (( ctratt & 1 << 19 )):
15:58:35 nvme_fdp --     nvme1 ctratt=0x8000 (bit clear), nvme0 ctratt=0x8000 (bit clear), nvme3 ctratt=0x88010 (bit set) -> echo nvme3, nvme2 ctratt=0x8000 (bit clear)
15:58:35 nvme_fdp -- nvme/functions.sh@207-209 -- # (( 1 > 0 )); echo nvme3; return 0
15:58:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
15:58:35 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
15:58:35 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
15:58:36 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
15:58:36 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
15:58:36 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
15:58:36 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_flexible_data_placement
************************************
15:58:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
Initializing NVMe Controllers
Attaching to 0000:00:13.0
Controller supports FDP Attached to 0000:00:13.0
Namespace ID: 1 Endurance Group ID: 1
Initialization complete.
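The condensed trace above reduces to two shell idioms: splitting a "register : value" listing into a bash associative array, and testing CTRATT bit 19 for Flexible Data Placement. A minimal standalone sketch of both follows; the device path and the nvme id-ctrl invocation are illustrative assumptions, not what SPDK's functions.sh literally runs:

  #!/usr/bin/env bash
  # Build a reg -> value map the way functions.sh fills nvme3[...] above.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # "ps    0" collapses to "ps0", as in the trace
      val=${val# }               # assumes a single space after ':' (nvme-cli style)
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme3 2>/dev/null)   # hypothetical device path

  # CTRATT bit 19 is FDP: 0x88010 above has it set, 0x8000 does not.
  if (( ${ctrl[ctratt]:-0} & 1 << 19 )); then
      echo "controller supports FDP"
  fi

Run against the four qemu controllers above, only the 0000:00:13.0 device (ctratt=0x88010) should take the echo branch, matching the single "echo nvme3" in the trace.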
==================================
== FDP tests for Namespace: #01 ==
==================================

Get Feature: FDP:
=================
Enabled: Yes
FDP configuration Index: 0

FDP configurations log page
===========================
Number of FDP configurations: 1
Version: 0
Size: 112
FDP Configuration Descriptor: 0
Descriptor Size: 96
Reclaim Group Identifier format: 2
FDP Volatile Write Cache: Not Present
FDP Configuration: Valid
Vendor Specific Size: 0
Number of Reclaim Groups: 2
Number of Reclaim Unit Handles: 8
Max Placement Identifiers: 128
Number of Namespaces Supported: 256
Reclaim Unit Nominal Size: 6000000 bytes
Estimated Reclaim Unit Time Limit: Not Reported
RUH Desc #000: RUH Type: Initially Isolated
RUH Desc #001: RUH Type: Initially Isolated
RUH Desc #002: RUH Type: Initially Isolated
RUH Desc #003: RUH Type: Initially Isolated
RUH Desc #004: RUH Type: Initially Isolated
RUH Desc #005: RUH Type: Initially Isolated
RUH Desc #006: RUH Type: Initially Isolated
RUH Desc #007: RUH Type: Initially Isolated

FDP reclaim unit handle usage log page
======================================
Number of Reclaim Unit Handles: 8
RUH Usage Desc #000: RUH Attributes: Controller Specified
RUH Usage Desc #001: RUH Attributes: Unused
RUH Usage Desc #002: RUH Attributes: Unused
RUH Usage Desc #003: RUH Attributes: Unused
RUH Usage Desc #004: RUH Attributes: Unused
RUH Usage Desc #005: RUH Attributes: Unused
RUH Usage Desc #006: RUH Attributes: Unused
RUH Usage Desc #007: RUH Attributes: Unused

FDP statistics log page
=======================
Host bytes with metadata written: 990842880
Media bytes with metadata written: 990949376
Media bytes erased: 0

FDP Reclaim unit handle status
==============================
Number of RUHS descriptors: 2
RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000f0f
RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000

FDP write on placement id: 0 success

Set Feature: Enabling FDP events on Placement handle: #0 Success

IO mgmt send: RUH update for Placement ID: #0 Success

Get Feature: FDP Events for Placement handle: #0
========================
Number of FDP Events: 6
FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
FDP Event: #4 Type: Media Reallocated Enabled: No
FDP Event: #5 Type: Implicitly modified RUH Enabled: No

FDP events log page
===================
Number of FDP events: 1
FDP Event #0:
Event Type: RU Not Written to Capacity
Placement Identifier: Valid
NSID: Valid
Location: Valid
Placement Identifier: 0
Event Timestamp: 4
Namespace Identifier: 1
Reclaim Group Identifier: 0
Reclaim Unit Handle Identifier: 0

FDP test passed

real 0m0.227s
user 0m0.064s
sys 0m0.062s
15:58:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
15:58:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_flexible_data_placement
************************************

real 0m7.466s
user 0m1.068s
sys 0m1.346s
15:58:36 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
15:58:36 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_fdp
************************************
15:58:36 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
15:58:36 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
15:58:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
15:58:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
15:58:36 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_rpc
************************************
15:58:36 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
15:58:36 nvme_rpc -- common/autotest_common.sh@1692-1693 -- # [[ y == y ]]; lcov --version | awk '{print $NF}'; lt 1.15 2
15:58:36 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
15:58:36 nvme_rpc -- scripts/common.sh@333-368 -- # (condensed) split both versions on IFS=.-: (ver1_l=2, ver2_l=1), decimal 1 / decimal 2, (( 1 < 2 )) -> return 0, so the '<' comparison holds
15:58:36 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
15:58:36 nvme_rpc -- common/autotest_common.sh@1706-1707 -- # (condensed) export LCOV_OPTS and LCOV='lcov', each carrying the same flag block (printed four times in the raw trace): --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
15:58:36 nvme_rpc -- common/autotest_common.sh@1498-1512 -- # (condensed) bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')); (( 4 == 0 )) is false; printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; echo 0000:00:10.0
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65802
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
15:58:36 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65802
15:58:36 nvme_rpc -- common/autotest_common.sh@835-844 -- # (condensed) waitforlisten defaults rpc_addr=/var/tmp/spdk.sock, max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-11-20 15:58:36.659535] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
[2024-11-20 15:58:36.659659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65802 ]
[2024-11-20 15:58:36.820062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-11-20 15:58:36.922675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 15:58:36.922687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
15:58:37 nvme_rpc -- common/autotest_common.sh@864-868 -- # (( i == 0 )); return 0
15:58:37 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
Nvme0n1
15:58:37 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
15:58:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
request:
{
  "bdev_name": "Nvme0n1",
  "filename": "non_existing_file",
  "method": "bdev_nvme_apply_firmware",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "open file failed."
}
15:58:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
15:58:37 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
15:58:37 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
15:58:38 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
15:58:38 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65802
15:58:38 nvme_rpc -- common/autotest_common.sh@954-978 -- # (condensed) pid 65802 is alive (kill -0), uname is Linux, comm=reactor_0 (not sudo): echo 'killing process with pid 65802'; kill 65802; wait 65802
killing process with pid 65802

real 0m3.300s
user 0m6.297s
sys 0m0.492s
************************************
END TEST nvme_rpc
************************************
15:58:39 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
15:58:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
15:58:39 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
15:58:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
15:58:39 -- common/autotest_common.sh@1111 -- # xtrace_disable
15:58:39 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_rpc_timeouts
************************************
15:58:39 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
15:58:39 nvme_rpc_timeouts -- common/autotest_common.sh@1692-1707 -- # (condensed) same lcov probe as in nvme_rpc above: lt 1.15 2 holds via scripts/common.sh cmp_versions, and LCOV_OPTS/LCOV are exported with the same branch/function coverage flags
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65867
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65867
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65899
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
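The lt 1.15 2 probe condensed above is scripts/common.sh's cmp_versions at work: split each version on '.', '-' and ':', then compare numerically component by component. A hedged re-creation of the idea, not SPDK's literal implementation (note that non-numeric components evaluate to 0 in bash arithmetic):

  version_lt() {    # exit 0 when $1 sorts strictly before $2
      local IFS='.-:' i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1      # versions compare equal
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"

With the inputs from the trace, version_lt 1.15 2 compares 1 against 2 in the first position and returns true immediately, which is why the coverage flags get exported.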
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65899
15:58:39 nvme_rpc_timeouts -- common/autotest_common.sh@835-844 -- # (condensed) waitforlisten defaults rpc_addr=/var/tmp/spdk.sock, max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:58:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
[2024-11-20 15:58:39.941306] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
[2024-11-20 15:58:39.941423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65899 ]
[2024-11-20 15:58:40.099471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-11-20 15:58:40.204347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-20 15:58:40.204615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
15:58:40 nvme_rpc_timeouts -- common/autotest_common.sh@864-868 -- # (( i == 0 )); return 0
15:58:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
Checking default timeout settings:
15:58:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
Making settings changes with rpc:
15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
Check default vs. modified settings:
15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39-47 -- # (condensed) for each setting, grep it out of /tmp/settings_default_65867 and /tmp/settings_modified_65867, strip punctuation with awk '{print $2}' and sed 's/[^a-zA-Z0-9]//g', then compare:
action_on_timeout: none -> abort; Setting action_on_timeout is changed as expected.
timeout_us: 0 -> 12000000; Setting timeout_us is changed as expected.
00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65867 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65867 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:43.824 Setting timeout_admin_us is changed as expected. 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65867 /tmp/settings_modified_65867 00:10:43.824 15:58:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65899 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65899 ']' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65899 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65899 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.824 killing process with pid 65899 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65899' 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65899 00:10:43.824 15:58:41 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65899 00:10:45.196 RPC TIMEOUT SETTING TEST PASSED. 00:10:45.196 15:58:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
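The whole pass/fail logic of this timeout test reduces to a few shell lines: snapshot the target's JSON config before and after bdev_nvme_set_options, then assert that each of the three fields actually changed. A condensed sketch stitched together from the nvme_rpc_timeouts.sh xtrace above; the redirections into /tmp/settings_*_65867 are implied by the file names rather than visible in the trace (65867 is the test shell's PID), and the failure branch is inferred:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Snapshot the defaults, apply the new timeouts, snapshot again.
    $rpc save_config > "/tmp/settings_default_$$"
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > "/tmp/settings_modified_$$"

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # Second whitespace-delimited field of the matching line, stripped of JSON punctuation,
        # e.g. '"action_on_timeout": "none",' yields 'none'.
        setting_before=$(grep "$setting" "/tmp/settings_default_$$" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" "/tmp/settings_modified_$$" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "Setting $setting was not changed!"
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done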
00:10:45.196 00:10:45.196 real 0m3.560s 00:10:45.196 user 0m7.015s 00:10:45.196 sys 0m0.482s 00:10:45.196 15:58:43 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.196 15:58:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:45.196 ************************************ 00:10:45.196 END TEST nvme_rpc_timeouts 00:10:45.196 ************************************ 00:10:45.196 15:58:43 -- spdk/autotest.sh@239 -- # uname -s 00:10:45.196 15:58:43 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:45.196 15:58:43 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:45.196 15:58:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.196 15:58:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.196 15:58:43 -- common/autotest_common.sh@10 -- # set +x 00:10:45.196 ************************************ 00:10:45.196 START TEST sw_hotplug 00:10:45.196 ************************************ 00:10:45.196 15:58:43 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:45.196 * Looking for test storage... 00:10:45.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:45.196 15:58:43 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.196 15:58:43 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.196 15:58:43 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.455 15:58:43 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.455 15:58:43 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:45.455 15:58:43 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.455 15:58:43 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 15:58:43 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 15:58:43 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 15:58:43 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.821 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:45.821 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:45.821 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:45.821 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:45.821 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:45.821 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:45.821 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
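The nvmes=($(nvme_in_userspace)) assignment above is what the long scripts/common.sh expansion that follows is computing: list every PCI function whose class/subclass/prog-if identifies an NVMe controller (01/08/02) and report the ones usable by the test. A sketch of the core pipeline using the exact lspci/grep/awk/tr stages from the trace; the pci_can_use (PCI_ALLOWED/PCI_BLOCKED) filtering and the FreeBSD branch are elided, and the driver check is simplified:

    # Class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe).
    # Field $2 of `lspci -mm -n` output is the quoted class code, e.g. "0108".
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))

    bdfs=()
    for bdf in "${nvmes[@]}"; do
        # Simplified: the real helper also accepts controllers already rebound
        # to userspace drivers and has a pciconf-based FreeBSD path.
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"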
00:10:45.821 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:45.821 15:58:43 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:45.822 15:58:43 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:45.822 15:58:43 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:45.822 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:45.822 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:45.822 15:58:43 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:46.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.339 Waiting for block devices as requested 00:10:46.339 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.339 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.339 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.339 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.594 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:51.594 15:58:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:51.594 15:58:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:51.852 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:51.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.852 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:52.110 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:52.367 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:52.367 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:52.367 15:58:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66750 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:52.367 15:58:50 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:52.367 15:58:50 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:52.367 15:58:50 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:52.367 15:58:50 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:52.367 15:58:50 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:52.367 15:58:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:52.624 Initializing NVMe Controllers 00:10:52.624 Attaching to 0000:00:10.0 00:10:52.624 Attaching to 0000:00:11.0 00:10:52.624 Attached to 0000:00:11.0 00:10:52.624 Attached to 0000:00:10.0 00:10:52.624 Initialization complete. Starting I/O... 
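debug_remove_attach_helper above wraps the whole hotplug exercise in timing_cmd, which relies on bash's time keyword with TIMEFORMAT=%2R so that only the elapsed wall-clock seconds (two decimals) survive; that number is what the "remove_attach_helper took 42.83s ..." summary near the end of this run reports. A minimal sketch of the mechanism, with the real autotest_common.sh plumbing for stdin and the command's own output elided:

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # `time { ...; }` prints the %2R-formatted duration on stderr; capture it.
        # The wrapped command's own output is simply discarded in this sketch.
        time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2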
00:10:52.624 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:52.624 QEMU NVMe Ctrl (12340 ): 5 I/Os completed (+5) 00:10:52.624 00:10:53.556 QEMU NVMe Ctrl (12341 ): 2660 I/Os completed (+2660) 00:10:53.556 QEMU NVMe Ctrl (12340 ): 2721 I/Os completed (+2716) 00:10:53.556 00:10:54.926 QEMU NVMe Ctrl (12341 ): 5813 I/Os completed (+3153) 00:10:54.926 QEMU NVMe Ctrl (12340 ): 5802 I/Os completed (+3081) 00:10:54.926 00:10:55.859 QEMU NVMe Ctrl (12341 ): 9209 I/Os completed (+3396) 00:10:55.859 QEMU NVMe Ctrl (12340 ): 9197 I/Os completed (+3395) 00:10:55.859 00:10:56.789 QEMU NVMe Ctrl (12341 ): 12365 I/Os completed (+3156) 00:10:56.789 QEMU NVMe Ctrl (12340 ): 12317 I/Os completed (+3120) 00:10:56.789 00:10:57.720 QEMU NVMe Ctrl (12341 ): 15647 I/Os completed (+3282) 00:10:57.721 QEMU NVMe Ctrl (12340 ): 15647 I/Os completed (+3330) 00:10:57.721 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.657 [2024-11-20 15:58:56.589089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:58.657 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:58.657 [2024-11-20 15:58:56.590417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.590468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.590487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.590504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:58.657 [2024-11-20 15:58:56.592241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.592293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.592309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.592323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.657 [2024-11-20 15:58:56.612683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:58.657 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:58.657 [2024-11-20 15:58:56.613760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.613801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.613821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.613838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:58.657 [2024-11-20 15:58:56.615492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.615528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.615544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 [2024-11-20 15:58:56.615557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.657 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.657 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:58.658 Attaching to 0000:00:10.0 00:10:58.658 Attached to 0000:00:10.0 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.658 15:58:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.658 Attaching to 0000:00:11.0 00:10:58.658 Attached to 0000:00:11.0 00:10:59.588 QEMU NVMe Ctrl (12340 ): 3567 I/Os completed (+3567) 00:10:59.588 QEMU NVMe Ctrl (12341 ): 3233 I/Os completed (+3233) 00:10:59.588 00:11:01.067 QEMU NVMe Ctrl (12340 ): 7107 I/Os completed (+3540) 00:11:01.067 QEMU NVMe Ctrl (12341 ): 6733 I/Os completed (+3500) 00:11:01.067 00:11:01.630 QEMU NVMe Ctrl (12340 ): 10314 I/Os completed (+3207) 00:11:01.630 QEMU NVMe Ctrl (12341 ): 10056 I/Os completed (+3323) 00:11:01.630 00:11:02.563 QEMU NVMe Ctrl (12340 ): 13407 I/Os completed (+3093) 00:11:02.563 QEMU NVMe Ctrl (12341 ): 13137 I/Os completed (+3081) 00:11:02.563 00:11:03.934 QEMU NVMe Ctrl (12340 ): 16502 I/Os completed (+3095) 00:11:03.934 QEMU NVMe Ctrl (12341 ): 16212 I/Os completed (+3075) 00:11:03.934 00:11:04.866 QEMU NVMe Ctrl (12340 ): 19694 I/Os completed (+3192) 00:11:04.866 QEMU NVMe Ctrl (12341 ): 19380 I/Os completed (+3168) 00:11:04.866 00:11:05.797 QEMU NVMe Ctrl (12340 ): 23152 I/Os completed (+3458) 00:11:05.797 QEMU NVMe Ctrl (12341 ): 23093 I/Os completed (+3713) 00:11:05.797 00:11:06.729 QEMU NVMe Ctrl (12340 ): 26554 I/Os completed (+3402) 00:11:06.729 QEMU NVMe Ctrl (12341 ): 26685 I/Os completed (+3592) 
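That was one complete hotplug event: both controllers were surprise-removed (the nvme_ctrlr_fail and "aborting outstanding command" bursts), then rescanned and rebound to uio_pci_generic, after which the I/O counters resume. The bare "echo 1" / "echo uio_pci_generic" lines in the sw_hotplug.sh xtrace are writes into PCI sysfs whose target paths the trace does not show; the remove/rescan/driver_override/drivers_probe attributes below are an inference from standard Linux PCI sysfs, not confirmed by this log:

    # One hotplug event: yank the devices, then bring them back under uio_pci_generic.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"        # surprise removal
    done
    sleep "$hotplug_wait"                                   # let the app notice and detach
    echo 1 > /sys/bus/pci/rescan                            # rediscover the functions
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe            # bind per the override
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # clear for later runs
    done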
00:11:06.729 00:11:07.670 QEMU NVMe Ctrl (12340 ): 29628 I/Os completed (+3074) 00:11:07.670 QEMU NVMe Ctrl (12341 ): 29838 I/Os completed (+3153) 00:11:07.670 00:11:08.604 QEMU NVMe Ctrl (12340 ): 32669 I/Os completed (+3041) 00:11:08.604 QEMU NVMe Ctrl (12341 ): 33236 I/Os completed (+3398) 00:11:08.604 00:11:09.538 QEMU NVMe Ctrl (12340 ): 35966 I/Os completed (+3297) 00:11:09.538 QEMU NVMe Ctrl (12341 ): 36746 I/Os completed (+3510) 00:11:09.538 00:11:10.909 QEMU NVMe Ctrl (12340 ): 39644 I/Os completed (+3678) 00:11:10.909 QEMU NVMe Ctrl (12341 ): 40402 I/Os completed (+3656) 00:11:10.909 00:11:10.909 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:10.909 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.909 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.909 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.909 [2024-11-20 15:59:08.859174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:10.909 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:10.909 [2024-11-20 15:59:08.860173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.860218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.860233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.860248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:10.909 [2024-11-20 15:59:08.861897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.861939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.861951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.861963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.909 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.909 [2024-11-20 15:59:08.882354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:10.909 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:10.909 [2024-11-20 15:59:08.883263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.883298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.883316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 [2024-11-20 15:59:08.883329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.909 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:10.910 [2024-11-20 15:59:08.884687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.910 [2024-11-20 15:59:08.884731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.910 [2024-11-20 15:59:08.884745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.910 [2024-11-20 15:59:08.884756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.910 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:10.910 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.910 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:10.910 EAL: Scan for (pci) bus failed. 00:11:10.910 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.910 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.910 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:10.910 Attaching to 0000:00:10.0 00:11:10.910 Attached to 0000:00:10.0 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.910 15:59:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.910 Attaching to 0000:00:11.0 00:11:10.910 Attached to 0000:00:11.0 00:11:11.840 QEMU NVMe Ctrl (12340 ): 2366 I/Os completed (+2366) 00:11:11.840 QEMU NVMe Ctrl (12341 ): 2091 I/Os completed (+2091) 00:11:11.840 00:11:12.774 QEMU NVMe Ctrl (12340 ): 5472 I/Os completed (+3106) 00:11:12.774 QEMU NVMe Ctrl (12341 ): 5269 I/Os completed (+3178) 00:11:12.774 00:11:13.762 QEMU NVMe Ctrl (12340 ): 8734 I/Os completed (+3262) 00:11:13.762 QEMU NVMe Ctrl (12341 ): 8474 I/Os completed (+3205) 00:11:13.762 00:11:14.701 QEMU NVMe Ctrl (12340 ): 11924 I/Os completed (+3190) 00:11:14.701 QEMU NVMe Ctrl (12341 ): 11645 I/Os completed (+3171) 00:11:14.701 00:11:15.639 QEMU NVMe Ctrl (12340 ): 15144 I/Os completed (+3220) 00:11:15.639 QEMU NVMe Ctrl (12341 ): 14865 I/Os completed (+3220) 00:11:15.639 00:11:16.577 QEMU NVMe Ctrl (12340 ): 18347 I/Os completed (+3203) 00:11:16.577 QEMU NVMe Ctrl (12341 ): 18045 I/Os completed (+3180) 00:11:16.577 00:11:17.955 QEMU NVMe Ctrl (12340 ): 21472 I/Os completed (+3125) 00:11:17.955 QEMU NVMe Ctrl (12341 ): 21190 I/Os completed (+3145) 00:11:17.955 
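These three events all run against the standalone hotplug example app (use_bdev=false), so removal is only observed through the app's error log. When the helper re-runs further down against spdk_tgt with use_bdev=true, departure is instead confirmed through the bdev layer; the "Still waiting for %s to be gone" lines seen there come from this fragment, reconstructed from the sw_hotplug.sh xtrace (rpc_cmd is the test framework's wrapper around scripts/rpc.py):

    bdev_bdfs() {
        # Every NVMe-backed bdev reports its controller's PCI address.
        rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }

    # Poll until the target has dropped every removed controller's bdev.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done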
00:11:18.892 QEMU NVMe Ctrl (12340 ): 24581 I/Os completed (+3109) 00:11:18.892 QEMU NVMe Ctrl (12341 ): 24359 I/Os completed (+3169) 00:11:18.892 00:11:19.831 QEMU NVMe Ctrl (12340 ): 27715 I/Os completed (+3134) 00:11:19.831 QEMU NVMe Ctrl (12341 ): 27491 I/Os completed (+3132) 00:11:19.831 00:11:20.809 QEMU NVMe Ctrl (12340 ): 30836 I/Os completed (+3121) 00:11:20.809 QEMU NVMe Ctrl (12341 ): 30624 I/Os completed (+3133) 00:11:20.809 00:11:21.752 QEMU NVMe Ctrl (12340 ): 33976 I/Os completed (+3140) 00:11:21.752 QEMU NVMe Ctrl (12341 ): 33771 I/Os completed (+3147) 00:11:21.752 00:11:22.695 QEMU NVMe Ctrl (12340 ): 37111 I/Os completed (+3135) 00:11:22.695 QEMU NVMe Ctrl (12341 ): 36863 I/Os completed (+3092) 00:11:22.695 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.955 [2024-11-20 15:59:21.102951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:22.955 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:22.955 [2024-11-20 15:59:21.104195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.104248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.104268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.104286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:22.955 [2024-11-20 15:59:21.106139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.106187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.106202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.106216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.955 [2024-11-20 15:59:21.126713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:22.955 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:22.955 [2024-11-20 15:59:21.127802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.127844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.127862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.127877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:22.955 [2024-11-20 15:59:21.131396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.131442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.131463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 [2024-11-20 15:59:21.131477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.955 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:22.955 EAL: Scan for (pci) bus failed. 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:22.955 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.215 Attaching to 0000:00:10.0 00:11:23.215 Attached to 0000:00:10.0 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.215 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.215 Attaching to 0000:00:11.0 00:11:23.215 Attached to 0000:00:11.0 00:11:23.215 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:23.215 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:23.215 [2024-11-20 15:59:21.424033] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:35.438 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:35.438 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.438 15:59:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.83 00:11:35.438 15:59:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.83 00:11:35.438 15:59:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:35.438 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:11:35.438 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:11:35.438 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 15:59:33 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66750 00:11:42.100 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66750) - No such process 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66750 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67300 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67300 00:11:42.100 15:59:39 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67300 ']' 00:11:42.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.100 15:59:39 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.100 15:59:39 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.100 15:59:39 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.100 15:59:39 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.100 15:59:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.100 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:42.100 [2024-11-20 15:59:39.501937] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:11:42.100 [2024-11-20 15:59:39.502066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67300 ] 00:11:42.100 [2024-11-20 15:59:39.653924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.100 [2024-11-20 15:59:39.756225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:42.359 15:59:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:42.359 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.941 15:59:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.941 15:59:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.941 15:59:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.941 [2024-11-20 15:59:46.457873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:48.941 [2024-11-20 15:59:46.459537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.459583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.459600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.459622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.459632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.459642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.459651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.459661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.459669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.459683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.459691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.459701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.857874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:48.941 [2024-11-20 15:59:46.859587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.859626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.859641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.859659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.859670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.859679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.859689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.859698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.859708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 [2024-11-20 15:59:46.859717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.941 [2024-11-20 15:59:46.859748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.941 [2024-11-20 15:59:46.859757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.941 15:59:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.941 15:59:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.941 15:59:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.941 15:59:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.941 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.941 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.941 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.202 15:59:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.423 15:59:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.423 15:59:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.423 15:59:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.423 [2024-11-20 15:59:59.358129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:01.423 [2024-11-20 15:59:59.360134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.423 [2024-11-20 15:59:59.360178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.423 [2024-11-20 15:59:59.360191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.423 [2024-11-20 15:59:59.360212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.423 [2024-11-20 15:59:59.360222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.423 [2024-11-20 15:59:59.360233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.423 [2024-11-20 15:59:59.360242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.423 [2024-11-20 15:59:59.360252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.423 [2024-11-20 15:59:59.360261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.423 [2024-11-20 15:59:59.360272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.423 [2024-11-20 15:59:59.360280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.423 [2024-11-20 15:59:59.360290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.423 15:59:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.423 15:59:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.423 15:59:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:01.423 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.684 [2024-11-20 15:59:59.758130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:01.684 [2024-11-20 15:59:59.759816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.684 [2024-11-20 15:59:59.759853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.684 [2024-11-20 15:59:59.759869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.684 [2024-11-20 15:59:59.759889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.684 [2024-11-20 15:59:59.759899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.684 [2024-11-20 15:59:59.759908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.684 [2024-11-20 15:59:59.759919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.684 [2024-11-20 15:59:59.759927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.684 [2024-11-20 15:59:59.759937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.684 [2024-11-20 15:59:59.759946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.684 [2024-11-20 15:59:59.759955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.684 [2024-11-20 15:59:59.759963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.684 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.684 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.684 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.684 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.684 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.684 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:01.684 15:59:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.684 15:59:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.684 15:59:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.946 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.946 15:59:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.946 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:02.205 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.206 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.206 16:00:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.484 16:00:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.484 16:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.484 16:00:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.484 16:00:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.484 16:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.484 16:00:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:14.484 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.484 [2024-11-20 16:00:12.358405] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:14.484 [2024-11-20 16:00:12.360155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.484 [2024-11-20 16:00:12.360200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.484 [2024-11-20 16:00:12.360214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.484 [2024-11-20 16:00:12.360236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.484 [2024-11-20 16:00:12.360246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.484 [2024-11-20 16:00:12.360259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.484 [2024-11-20 16:00:12.360268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.484 [2024-11-20 16:00:12.360277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.485 [2024-11-20 16:00:12.360286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.485 [2024-11-20 16:00:12.360296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.485 [2024-11-20 16:00:12.360305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.485 [2024-11-20 16:00:12.360315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.746 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:14.746 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.746 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.746 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.746 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.746 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.747 16:00:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.747 16:00:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.747 [2024-11-20 16:00:12.858394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
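
The polling above is driven by a tiny helper that asks the running SPDK target which PCI addresses still back an NVMe bdev. Reconstructed from the xtrace (sw_hotplug.sh@12-13, where the /dev/fd/63 in the trace is the process substitution), a sketch:

    # bdev_bdfs: list the PCI addresses of every NVMe-backed bdev the target
    # still knows about; rpc_cmd wraps scripts/rpc.py against the app under
    # test, and <(...) is what shows up as /dev/fd/63 in the trace.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) |
            sort -u
    }
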
00:12:14.747 [2024-11-20 16:00:12.860166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.747 [2024-11-20 16:00:12.860209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.747 [2024-11-20 16:00:12.860226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.747 [2024-11-20 16:00:12.860247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.747 [2024-11-20 16:00:12.860259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.747 [2024-11-20 16:00:12.860270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.747 [2024-11-20 16:00:12.860284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.747 [2024-11-20 16:00:12.860294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.747 [2024-11-20 16:00:12.860307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.747 [2024-11-20 16:00:12.860317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.747 [2024-11-20 16:00:12.860329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.747 [2024-11-20 16:00:12.860338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.747 16:00:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.747 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:14.747 16:00:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.320 16:00:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.320 16:00:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.320 16:00:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.320 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.580 16:00:13 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.580 16:00:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.36 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.36 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.36 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.36 2 00:12:27.806 remove_attach_helper took 45.36s to complete (handling 2 nvme drive(s)) 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:27.806 16:00:25 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:27.806 16:00:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.379 16:00:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.379 16:00:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.379 16:00:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:34.379 16:00:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:34.379 [2024-11-20 16:00:31.848676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:34.379 [2024-11-20 16:00:31.850308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:31.850349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:31.850363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:31.850385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:31.850394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:31.850405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:31.850414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:31.850425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:31.850434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:31.850444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:31.850452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:31.850466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:32.248681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:34.379 [2024-11-20 16:00:32.250294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:32.250335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:32.250350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:32.250368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:32.250379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:32.250388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:32.250399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:32.250408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:32.250419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 [2024-11-20 16:00:32.250428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.379 [2024-11-20 16:00:32.250438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.379 [2024-11-20 16:00:32.250447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.379 16:00:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.379 16:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.379 16:00:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:34.379 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.380 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:12:34.380 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.380 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:34.640 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:34.640 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.640 16:00:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.902 16:00:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.902 16:00:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.902 16:00:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.902 16:00:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.902 16:00:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.902 [2024-11-20 16:00:44.748942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:46.902 [2024-11-20 16:00:44.750214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.902 [2024-11-20 16:00:44.750257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.902 [2024-11-20 16:00:44.750270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.902 [2024-11-20 16:00:44.750291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.902 [2024-11-20 16:00:44.750301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.902 [2024-11-20 16:00:44.750311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.902 [2024-11-20 16:00:44.750320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.902 [2024-11-20 16:00:44.750330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.902 [2024-11-20 16:00:44.750338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.902 [2024-11-20 16:00:44.750349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.902 [2024-11-20 16:00:44.750357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.902 [2024-11-20 16:00:44.750367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.902 16:00:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:46.902 16:00:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:46.902 [2024-11-20 16:00:45.148946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
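
The "(( 2 > 0 )) ... sleep 0.5 ... Still waiting" cadence comes from a compact wait loop; the order of the xtrace lines (re-list, test, sleep, then printf at @51) suggests an &&-chained while condition, roughly:

    # Poll until the removed controllers disappear from the target's bdev
    # list (sw_hotplug.sh@50-51); reconstruction based on the xtrace order.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done

Note that when the count finally reaches zero, (( 0 > 0 )) fails before the sleep runs, which is exactly why the trace ends each wait with a bare "(( 0 > 0 ))" and no further delay.
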
00:12:47.160 [2024-11-20 16:00:45.150554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.160 [2024-11-20 16:00:45.150595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.160 [2024-11-20 16:00:45.150611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.160 [2024-11-20 16:00:45.150629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.160 [2024-11-20 16:00:45.150644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.160 [2024-11-20 16:00:45.150653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.160 [2024-11-20 16:00:45.150663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.160 [2024-11-20 16:00:45.150672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.160 [2024-11-20 16:00:45.150682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.160 [2024-11-20 16:00:45.150691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.160 [2024-11-20 16:00:45.150701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.160 [2024-11-20 16:00:45.150709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:47.160 16:00:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.160 16:00:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.160 16:00:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:47.160 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:47.161 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:47.416 16:00:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.642 16:00:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.642 16:00:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.642 16:00:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.642 16:00:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.642 16:00:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.642 [2024-11-20 16:00:57.649194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
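
Between removals the helper also verifies the reattach: after the settle at @66 it re-reads the bdf list and requires an exact match against the original pair, which is what the heavily escaped pattern at @71 is. A sketch of that check, reconstructed from the trace (the scaling of the 12-second sleep from the base hotplug_wait of 6 is an inference):

    sleep "$hotplug_wait"                 # @66: sleep 12 in this run
    bdfs=($(bdev_bdfs))                   # @70: re-enumerate after rescan
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]     # @71: quoted RHS => literal, ordered match
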
00:12:59.642 [2024-11-20 16:00:57.650554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.642 [2024-11-20 16:00:57.650586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.642 [2024-11-20 16:00:57.650598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.642 [2024-11-20 16:00:57.650616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.642 [2024-11-20 16:00:57.650624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.642 [2024-11-20 16:00:57.650633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.642 [2024-11-20 16:00:57.650640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.642 [2024-11-20 16:00:57.650651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.642 [2024-11-20 16:00:57.650658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.642 [2024-11-20 16:00:57.650666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.642 [2024-11-20 16:00:57.650673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.642 [2024-11-20 16:00:57.650681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.642 16:00:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:59.642 16:00:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:00.207 [2024-11-20 16:00:58.149214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
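
The NOTICE blocks that accompany every removal are expected: each controller keeps four Asynchronous Event Requests (admin opcode 0x0c, cid 187-190) outstanding, and detaching the device forces the driver to abort them. The completion status prints as (SCT/SC), so (00/07) decodes to Status Code Type 0x0 (Generic Command Status) with Status Code 0x07, Command Abort Requested. To confirm from a saved copy of this output that only those benign aborts occurred (sw_hotplug.log is a hypothetical file name for illustration):

    # Group the controller-failure notices by PCI address...
    grep -o 'nvme_ctrlr_fail: \*ERROR\*: \[[0-9a-f:.]*' sw_hotplug.log | sort | uniq -c
    # ...and check that every printed completion is the (00/07) abort status.
    grep -c 'ABORTED - BY REQUEST (00/07)' sw_hotplug.log
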
00:13:00.207 [2024-11-20 16:00:58.151959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.207 [2024-11-20 16:00:58.151998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.207 [2024-11-20 16:00:58.152010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.207 [2024-11-20 16:00:58.152025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.207 [2024-11-20 16:00:58.152034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.207 [2024-11-20 16:00:58.152042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.207 [2024-11-20 16:00:58.152051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.207 [2024-11-20 16:00:58.152058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.207 [2024-11-20 16:00:58.152066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.207 [2024-11-20 16:00:58.152073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:00.207 [2024-11-20 16:00:58.152083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.207 [2024-11-20 16:00:58.152090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:00.207 16:00:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.207 16:00:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:00.207 16:00:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:00.207 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:00.465 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.465 16:00:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.74 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.74 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.74 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.74 2 00:13:12.785 remove_attach_helper took 44.74s to complete (handling 2 nvme drive(s)) 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:12.785 16:01:10 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67300 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67300 ']' 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67300 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67300 00:13:12.785 16:01:10 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.786 16:01:10 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.786 killing process with pid 67300 00:13:12.786 16:01:10 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67300' 00:13:12.786 16:01:10 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67300 00:13:12.786 16:01:10 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67300 00:13:13.718 16:01:11 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:13.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.235 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.235 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.493 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.493 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.493 00:13:14.493 real 2m29.302s 00:13:14.493 user 1m50.851s 00:13:14.493 sys 0m17.169s 00:13:14.493 16:01:12 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.493 16:01:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:14.493 ************************************ 00:13:14.493 END TEST sw_hotplug 00:13:14.493 ************************************ 00:13:14.493 16:01:12 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:14.493 16:01:12 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:14.493 16:01:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.493 16:01:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.493 16:01:12 -- common/autotest_common.sh@10 -- # set +x 00:13:14.493 ************************************ 00:13:14.493 START TEST nvme_xnvme 00:13:14.493 ************************************ 00:13:14.493 16:01:12 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:14.493 * Looking for test storage... 00:13:14.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.753 16:01:12 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.753 --rc genhtml_branch_coverage=1 00:13:14.753 --rc genhtml_function_coverage=1 00:13:14.753 --rc genhtml_legend=1 00:13:14.753 --rc geninfo_all_blocks=1 00:13:14.753 --rc geninfo_unexecuted_blocks=1 00:13:14.753 00:13:14.753 ' 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.753 --rc genhtml_branch_coverage=1 00:13:14.753 --rc genhtml_function_coverage=1 00:13:14.753 --rc genhtml_legend=1 00:13:14.753 --rc geninfo_all_blocks=1 00:13:14.753 --rc geninfo_unexecuted_blocks=1 00:13:14.753 00:13:14.753 ' 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.753 --rc genhtml_branch_coverage=1 00:13:14.753 --rc genhtml_function_coverage=1 00:13:14.753 --rc genhtml_legend=1 00:13:14.753 --rc geninfo_all_blocks=1 00:13:14.753 --rc geninfo_unexecuted_blocks=1 00:13:14.753 00:13:14.753 ' 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.753 --rc genhtml_branch_coverage=1 00:13:14.753 --rc genhtml_function_coverage=1 00:13:14.753 --rc genhtml_legend=1 00:13:14.753 --rc geninfo_all_blocks=1 00:13:14.753 --rc geninfo_unexecuted_blocks=1 00:13:14.753 00:13:14.753 ' 00:13:14.753 16:01:12 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:14.753 16:01:12 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:14.753 16:01:12 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:14.753 16:01:12 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
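
The CONFIG_* assignments being sourced here are SPDK's build-time switches; the same information reappears below as include/spdk/config.h, where y becomes "#define SPDK_CONFIG_FOO 1" and n becomes "#undef SPDK_CONFIG_FOO". A minimal illustration of that correspondence (the real header is generated by SPDK's configure script, not by this snippet):

    # Derive config.h-style lines from build_config.sh; illustrative only.
    while IFS='=' read -r key val; do
        case $val in
            y) printf '#define SPDK_%s 1\n' "$key" ;;   # CONFIG_ASAN=y  -> #define SPDK_CONFIG_ASAN 1
            n) printf '#undef SPDK_%s\n' "$key" ;;      # CONFIG_USDT=n  -> #undef SPDK_CONFIG_USDT
            *) printf '#define SPDK_%s %s\n' "$key" "$val" ;;  # paths/values pass through
        esac
    done < /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
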
00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:14.753 16:01:12 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:14.754 16:01:12 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:14.754 16:01:12 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:14.754 16:01:12 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:14.754 #define SPDK_CONFIG_H 00:13:14.754 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:14.754 #define SPDK_CONFIG_APPS 1 00:13:14.754 #define SPDK_CONFIG_ARCH native 00:13:14.754 #define SPDK_CONFIG_ASAN 1 00:13:14.754 #undef SPDK_CONFIG_AVAHI 00:13:14.754 #undef SPDK_CONFIG_CET 00:13:14.754 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:14.754 #define SPDK_CONFIG_COVERAGE 1 00:13:14.754 #define SPDK_CONFIG_CROSS_PREFIX 00:13:14.754 #undef SPDK_CONFIG_CRYPTO 00:13:14.754 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:14.754 #undef SPDK_CONFIG_CUSTOMOCF 00:13:14.754 #undef SPDK_CONFIG_DAOS 00:13:14.754 #define SPDK_CONFIG_DAOS_DIR 00:13:14.754 #define SPDK_CONFIG_DEBUG 1 00:13:14.754 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:14.754 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:14.754 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:14.754 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:14.754 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:14.754 #undef SPDK_CONFIG_DPDK_UADK 00:13:14.754 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:14.754 #define SPDK_CONFIG_EXAMPLES 1 00:13:14.754 #undef SPDK_CONFIG_FC 00:13:14.754 #define SPDK_CONFIG_FC_PATH 00:13:14.754 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:14.754 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:14.754 #define SPDK_CONFIG_FSDEV 1 00:13:14.754 #undef SPDK_CONFIG_FUSE 00:13:14.754 #undef SPDK_CONFIG_FUZZER 00:13:14.754 #define SPDK_CONFIG_FUZZER_LIB 00:13:14.754 #undef SPDK_CONFIG_GOLANG 00:13:14.754 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:14.754 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:14.754 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:14.754 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:14.754 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:14.754 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:14.754 #undef SPDK_CONFIG_HAVE_LZ4 00:13:14.754 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:14.754 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:14.754 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:14.754 #define SPDK_CONFIG_IDXD 1 00:13:14.754 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:14.754 #undef SPDK_CONFIG_IPSEC_MB 00:13:14.754 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:14.754 #define SPDK_CONFIG_ISAL 1 00:13:14.754 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:14.754 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:14.754 #define SPDK_CONFIG_LIBDIR 00:13:14.754 #undef SPDK_CONFIG_LTO 00:13:14.754 #define SPDK_CONFIG_MAX_LCORES 128 00:13:14.754 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:14.754 #define SPDK_CONFIG_NVME_CUSE 1 00:13:14.754 #undef SPDK_CONFIG_OCF 00:13:14.754 #define SPDK_CONFIG_OCF_PATH 00:13:14.754 #define SPDK_CONFIG_OPENSSL_PATH 00:13:14.754 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:14.754 #define SPDK_CONFIG_PGO_DIR 00:13:14.754 #undef SPDK_CONFIG_PGO_USE 00:13:14.754 #define SPDK_CONFIG_PREFIX /usr/local 00:13:14.754 #undef SPDK_CONFIG_RAID5F 00:13:14.754 #undef SPDK_CONFIG_RBD 00:13:14.754 #define SPDK_CONFIG_RDMA 1 00:13:14.754 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:14.754 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:14.754 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:14.754 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:14.754 #define SPDK_CONFIG_SHARED 1 00:13:14.754 #undef SPDK_CONFIG_SMA 00:13:14.754 #define SPDK_CONFIG_TESTS 1 00:13:14.754 #undef SPDK_CONFIG_TSAN 00:13:14.754 #define SPDK_CONFIG_UBLK 1 00:13:14.754 #define SPDK_CONFIG_UBSAN 1 00:13:14.754 #undef SPDK_CONFIG_UNIT_TESTS 00:13:14.754 #undef SPDK_CONFIG_URING 00:13:14.754 #define SPDK_CONFIG_URING_PATH 00:13:14.754 #undef SPDK_CONFIG_URING_ZNS 00:13:14.754 #undef SPDK_CONFIG_USDT 00:13:14.754 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:14.754 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:14.754 #undef SPDK_CONFIG_VFIO_USER 00:13:14.754 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:14.754 #define SPDK_CONFIG_VHOST 1 00:13:14.754 #define SPDK_CONFIG_VIRTIO 1 00:13:14.754 #undef SPDK_CONFIG_VTUNE 00:13:14.754 #define SPDK_CONFIG_VTUNE_DIR 00:13:14.754 #define SPDK_CONFIG_WERROR 1 00:13:14.754 #define SPDK_CONFIG_WPDK_DIR 00:13:14.754 #define SPDK_CONFIG_XNVME 1 00:13:14.754 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:14.754 16:01:12 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:14.754 16:01:12 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.754 16:01:12 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.754 16:01:12 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.754 16:01:12 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.754 16:01:12 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.754 16:01:12 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.754 16:01:12 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 16:01:12 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 16:01:12 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:14.755 16:01:12 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:14.755 
16:01:12 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:14.755 16:01:12 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:14.755 16:01:12 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:14.756 16:01:12 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:14.756 16:01:12 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
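The trace above shows autotest_common.sh assembling the sanitizer runtime options and an LSAN leak-suppression file before any SPDK binary runs. A minimal standalone sketch of that pattern, using the option strings exactly as they appear in the trace (the suppression-file handling is condensed from the @204-@244 lines):

# Sketch: recreate the sanitizer environment set up by autotest_common.sh above.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"    # known libfuse3 leak, suppressed
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"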
00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68654 ]] 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68654 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.itAHyw 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.itAHyw/tests/xnvme /tmp/spdk.itAHyw 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:14.756 16:01:12 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:14.756 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977051136 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591064576 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260621312 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493358080 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506153984 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977051136 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591064576 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=90356514816 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=9346265088 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:14.757 * Looking for test storage... 
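set_test_storage above snapshots df -T into parallel arrays keyed by mount point, then walks the storage candidates until one has enough free space. A condensed sketch of that check, with the array and variable names taken from the trace (the read order "size use avail" mirrors the @373 lines; the real helper computes requested_size with an extra margin):

# Sketch of the set_test_storage space check traced above.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source; fss["$mount"]=$fs
  sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

requested_size=2214592512   # 2 GiB plus margin, as computed above
for target_dir in "${storage_candidates[@]}"; do
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  (( ${avails["$mount"]} >= requested_size )) && break   # e.g. /home: 13977051136 bytes free
done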
00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13977051136 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.757 16:01:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:14.757 16:01:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:14.758 16:01:12 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.758 16:01:12 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.758 --rc genhtml_branch_coverage=1 00:13:14.758 --rc genhtml_function_coverage=1 00:13:14.758 --rc genhtml_legend=1 00:13:14.758 --rc geninfo_all_blocks=1 00:13:14.758 --rc geninfo_unexecuted_blocks=1 00:13:14.758 00:13:14.758 ' 00:13:14.758 16:01:12 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.758 --rc genhtml_branch_coverage=1 00:13:14.758 --rc genhtml_function_coverage=1 00:13:14.758 --rc genhtml_legend=1 00:13:14.758 --rc geninfo_all_blocks=1 
00:13:14.758 --rc geninfo_unexecuted_blocks=1 00:13:14.758 00:13:14.758 ' 00:13:14.758 16:01:12 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.758 --rc genhtml_branch_coverage=1 00:13:14.758 --rc genhtml_function_coverage=1 00:13:14.758 --rc genhtml_legend=1 00:13:14.758 --rc geninfo_all_blocks=1 00:13:14.758 --rc geninfo_unexecuted_blocks=1 00:13:14.758 00:13:14.758 ' 00:13:14.758 16:01:12 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.758 --rc genhtml_branch_coverage=1 00:13:14.758 --rc genhtml_function_coverage=1 00:13:14.758 --rc genhtml_legend=1 00:13:14.758 --rc geninfo_all_blocks=1 00:13:14.758 --rc geninfo_unexecuted_blocks=1 00:13:14.758 00:13:14.758 ' 00:13:14.758 16:01:12 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.758 16:01:12 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.758 16:01:12 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.758 16:01:12 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.758 16:01:12 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.758 16:01:12 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:14.758 16:01:12 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.758 16:01:12 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:14.758 16:01:12 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:15.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:15.275 Waiting for block devices as requested 00:13:15.275 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.275 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.535 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.535 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.859 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.859 16:01:18 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:20.859 16:01:19 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:20.859 16:01:19 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:21.120 16:01:19 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:21.120 16:01:19 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:21.120 No valid GPT data, bailing 00:13:21.120 16:01:19 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:21.120 16:01:19 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:21.120 16:01:19 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:21.120 16:01:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:21.120 16:01:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.120 16:01:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.120 16:01:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.120 ************************************ 00:13:21.120 START TEST xnvme_rpc 00:13:21.120 ************************************ 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69035 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69035 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69035 ']' 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.120 16:01:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.120 [2024-11-20 16:01:19.333189] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
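xnvme_rpc above starts a bare spdk_tgt and blocks in waitforlisten until the RPC socket answers. Roughly, the handshake looks like this; the polling body is a simplified stand-in for the real waitforlisten loop, assuming the stock scripts/rpc.py client:

# Sketch: launch spdk_tgt and wait for its RPC socket, as xnvme_rpc does above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt=$!
rpc_addr=/var/tmp/spdk.sock
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; do
  kill -0 "$spdk_tgt" || exit 1   # bail out if the target died during startup
  sleep 0.1
done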
00:13:21.120 [2024-11-20 16:01:19.333315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69035 ] 00:13:21.378 [2024-11-20 16:01:19.495930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.378 [2024-11-20 16:01:19.597855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.310 xnvme_bdev 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:22.310 16:01:20 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69035 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69035 ']' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69035 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69035 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.310 killing process with pid 69035 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69035' 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69035 00:13:22.310 16:01:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69035 00:13:24.244 00:13:24.244 real 0m2.724s 00:13:24.244 user 0m2.871s 00:13:24.244 sys 0m0.370s 00:13:24.244 16:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.244 16:01:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.244 ************************************ 00:13:24.244 END TEST xnvme_rpc 00:13:24.244 ************************************ 00:13:24.244 16:01:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:24.244 16:01:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.244 16:01:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.244 16:01:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.244 ************************************ 00:13:24.244 START TEST xnvme_bdevperf 00:13:24.244 ************************************ 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:24.244 16:01:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.244 { 00:13:24.244 "subsystems": [ 00:13:24.244 { 00:13:24.244 "subsystem": "bdev", 00:13:24.244 "config": [ 00:13:24.244 { 00:13:24.244 "params": { 00:13:24.244 "io_mechanism": "libaio", 00:13:24.244 "conserve_cpu": false, 00:13:24.244 "filename": "/dev/nvme0n1", 00:13:24.244 "name": "xnvme_bdev" 00:13:24.244 }, 00:13:24.244 "method": "bdev_xnvme_create" 00:13:24.244 }, 00:13:24.244 { 00:13:24.244 "method": "bdev_wait_for_examine" 00:13:24.244 } 00:13:24.244 ] 00:13:24.244 } 00:13:24.244 ] 00:13:24.244 } 00:13:24.244 [2024-11-20 16:01:22.086407] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:13:24.244 [2024-11-20 16:01:22.086536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69104 ] 00:13:24.244 [2024-11-20 16:01:22.247064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.244 [2024-11-20 16:01:22.348081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.503 Running I/O for 5 seconds... 00:13:26.370 37737.00 IOPS, 147.41 MiB/s [2024-11-20T16:01:25.994Z] 38804.00 IOPS, 151.58 MiB/s [2024-11-20T16:01:26.925Z] 38595.67 IOPS, 150.76 MiB/s [2024-11-20T16:01:27.859Z] 38464.50 IOPS, 150.25 MiB/s 00:13:29.609 Latency(us) 00:13:29.609 [2024-11-20T16:01:27.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.609 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:29.609 xnvme_bdev : 5.00 38312.95 149.66 0.00 0.00 1665.93 187.47 6805.66 00:13:29.609 [2024-11-20T16:01:27.859Z] =================================================================================================================== 00:13:29.609 [2024-11-20T16:01:27.859Z] Total : 38312.95 149.66 0.00 0.00 1665.93 187.47 6805.66 00:13:30.175 16:01:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.175 16:01:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:30.175 16:01:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:30.175 16:01:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:30.175 16:01:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.175 { 00:13:30.175 "subsystems": [ 00:13:30.175 { 00:13:30.175 "subsystem": "bdev", 00:13:30.175 "config": [ 00:13:30.175 { 00:13:30.175 "params": { 00:13:30.175 "io_mechanism": "libaio", 00:13:30.175 "conserve_cpu": false, 00:13:30.175 "filename": "/dev/nvme0n1", 00:13:30.175 "name": "xnvme_bdev" 00:13:30.175 }, 00:13:30.175 "method": "bdev_xnvme_create" 00:13:30.175 }, 00:13:30.175 { 00:13:30.175 "method": "bdev_wait_for_examine" 00:13:30.175 } 00:13:30.175 ] 00:13:30.175 } 00:13:30.175 ] 00:13:30.175 } 00:13:30.175 [2024-11-20 16:01:28.393711] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
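Each bdevperf run above takes its bdev configuration as JSON on /dev/fd/62; gen_conf emits the subsystem block quoted in the log. The same invocation can be reproduced with process substitution, using the flags and JSON exactly as traced for the randwrite pass:

# Sketch: replay the bdevperf randwrite run above, feeding gen_conf's JSON
# through process substitution instead of the harness's fd 62 plumbing.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096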
00:13:30.175 [2024-11-20 16:01:28.393844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69179 ] 00:13:30.431 [2024-11-20 16:01:28.554619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.431 [2024-11-20 16:01:28.655906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.688 Running I/O for 5 seconds... 00:13:33.013 36855.00 IOPS, 143.96 MiB/s [2024-11-20T16:01:32.198Z] 38592.50 IOPS, 150.75 MiB/s [2024-11-20T16:01:33.139Z] 37944.67 IOPS, 148.22 MiB/s [2024-11-20T16:01:34.076Z] 33208.75 IOPS, 129.72 MiB/s [2024-11-20T16:01:34.077Z] 29669.60 IOPS, 115.90 MiB/s 00:13:35.827 Latency(us) 00:13:35.827 [2024-11-20T16:01:34.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.827 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:35.827 xnvme_bdev : 5.03 29530.54 115.35 0.00 0.00 2157.18 58.29 66947.54 00:13:35.827 [2024-11-20T16:01:34.077Z] =================================================================================================================== 00:13:35.827 [2024-11-20T16:01:34.077Z] Total : 29530.54 115.35 0.00 0.00 2157.18 58.29 66947.54 00:13:36.764 ************************************ 00:13:36.764 END TEST xnvme_bdevperf 00:13:36.764 ************************************ 00:13:36.764 00:13:36.764 real 0m12.648s 00:13:36.764 user 0m5.404s 00:13:36.764 sys 0m5.020s 00:13:36.764 16:01:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.764 16:01:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.764 16:01:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:36.764 16:01:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:36.764 16:01:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.764 16:01:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.764 ************************************ 00:13:36.764 START TEST xnvme_fio_plugin 00:13:36.764 ************************************ 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:36.764 16:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.764 { 00:13:36.764 "subsystems": [ 00:13:36.764 { 00:13:36.764 "subsystem": "bdev", 00:13:36.764 "config": [ 00:13:36.764 { 00:13:36.764 "params": { 00:13:36.764 "io_mechanism": "libaio", 00:13:36.764 "conserve_cpu": false, 00:13:36.764 "filename": "/dev/nvme0n1", 00:13:36.764 "name": "xnvme_bdev" 00:13:36.764 }, 00:13:36.764 "method": "bdev_xnvme_create" 00:13:36.764 }, 00:13:36.764 { 00:13:36.764 "method": "bdev_wait_for_examine" 00:13:36.764 } 00:13:36.764 ] 00:13:36.764 } 00:13:36.764 ] 00:13:36.764 } 00:13:36.764 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:36.764 fio-3.35 00:13:36.764 Starting 1 thread 00:13:43.328 00:13:43.328 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69298: Wed Nov 20 16:01:40 2024 00:13:43.328 read: IOPS=41.0k, BW=160MiB/s (168MB/s)(801MiB/5001msec) 00:13:43.328 slat (usec): min=3, max=2522, avg=19.46, stdev=49.57 00:13:43.328 clat (usec): min=4, max=28359, avg=1037.56, stdev=692.77 00:13:43.328 lat (usec): min=39, max=28370, avg=1057.01, stdev=694.52 00:13:43.328 clat percentiles (usec): 00:13:43.328 | 1.00th=[ 182], 5.00th=[ 297], 10.00th=[ 400], 20.00th=[ 537], 00:13:43.328 | 30.00th=[ 660], 40.00th=[ 783], 50.00th=[ 898], 60.00th=[ 1037], 00:13:43.328 | 70.00th=[ 1205], 80.00th=[ 1434], 90.00th=[ 1811], 95.00th=[ 2212], 00:13:43.328 | 99.00th=[ 3163], 99.50th=[ 3621], 99.90th=[ 6456], 99.95th=[ 9241], 00:13:43.328 | 99.99th=[14484] 00:13:43.328 bw ( KiB/s): 
min=146880, max=205620, per=100.00%, avg=165654.67, stdev=18575.68, samples=9 00:13:43.328 iops : min=36720, max=51405, avg=41413.67, stdev=4643.92, samples=9 00:13:43.328 lat (usec) : 10=0.01%, 50=0.01%, 100=0.07%, 250=2.98%, 500=13.88% 00:13:43.328 lat (usec) : 750=20.63%, 1000=19.95% 00:13:43.328 lat (msec) : 2=35.31%, 4=6.80%, 10=0.31%, 20=0.04%, 50=0.01% 00:13:43.328 cpu : usr=33.78%, sys=50.74%, ctx=113, majf=0, minf=764 00:13:43.328 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=7.9%, 16=23.9%, 32=62.9%, >=64=2.2% 00:13:43.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.328 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:43.328 issued rwts: total=205045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:43.328 00:13:43.328 Run status group 0 (all jobs): 00:13:43.328 READ: bw=160MiB/s (168MB/s), 160MiB/s-160MiB/s (168MB/s-168MB/s), io=801MiB (840MB), run=5001-5001msec 00:13:43.328 ----------------------------------------------------- 00:13:43.328 Suppressions used: 00:13:43.328 count bytes template 00:13:43.328 1 11 /usr/src/fio/parse.c 00:13:43.328 1 8 libtcmalloc_minimal.so 00:13:43.328 1 904 libcrypto.so 00:13:43.328 ----------------------------------------------------- 00:13:43.328 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.328 16:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.328 { 00:13:43.328 "subsystems": [ 00:13:43.328 { 00:13:43.328 "subsystem": "bdev", 00:13:43.328 "config": [ 00:13:43.328 { 00:13:43.328 "params": { 00:13:43.328 "io_mechanism": "libaio", 00:13:43.329 "conserve_cpu": false, 00:13:43.329 "filename": "/dev/nvme0n1", 00:13:43.329 "name": "xnvme_bdev" 00:13:43.329 }, 00:13:43.329 "method": "bdev_xnvme_create" 00:13:43.329 }, 00:13:43.329 { 00:13:43.329 "method": "bdev_wait_for_examine" 00:13:43.329 } 00:13:43.329 ] 00:13:43.329 } 00:13:43.329 ] 00:13:43.329 } 00:13:43.587 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.587 fio-3.35 00:13:43.587 Starting 1 thread 00:13:50.142 00:13:50.142 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69390: Wed Nov 20 16:01:47 2024 00:13:50.142 write: IOPS=16.9k, BW=65.9MiB/s (69.1MB/s)(330MiB/5002msec); 0 zone resets 00:13:50.142 slat (usec): min=3, max=894, avg=18.71, stdev=32.75 00:13:50.142 clat (usec): min=5, max=97302, avg=3623.79, stdev=4064.22 00:13:50.142 lat (usec): min=46, max=97331, avg=3642.51, stdev=4060.21 00:13:50.142 clat percentiles (usec): 00:13:50.142 | 1.00th=[ 47], 5.00th=[ 145], 10.00th=[ 273], 20.00th=[ 1582], 00:13:50.142 | 30.00th=[ 2671], 40.00th=[ 3195], 50.00th=[ 3654], 60.00th=[ 4080], 00:13:50.142 | 70.00th=[ 4490], 80.00th=[ 5014], 90.00th=[ 5800], 95.00th=[ 6456], 00:13:50.142 | 99.00th=[ 8979], 99.50th=[12518], 99.90th=[86508], 99.95th=[90702], 00:13:50.142 | 99.99th=[96994] 00:13:50.142 bw ( KiB/s): min=58808, max=70552, per=98.51%, avg=66520.00, stdev=3630.46, samples=9 00:13:50.142 iops : min=14702, max=17638, avg=16630.00, stdev=907.61, samples=9 00:13:50.142 lat (usec) : 10=0.02%, 20=0.18%, 50=0.87%, 100=1.96%, 250=6.32% 00:13:50.142 lat (usec) : 500=4.75%, 750=2.20%, 1000=1.49% 00:13:50.142 lat (msec) : 2=4.57%, 4=35.94%, 10=40.97%, 20=0.52%, 50=0.08% 00:13:50.142 lat (msec) : 100=0.15% 00:13:50.142 cpu : usr=67.01%, sys=18.68%, ctx=24, majf=0, minf=765 00:13:50.142 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.7%, 16=2.4%, 32=78.7%, >=64=17.8% 00:13:50.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.142 complete : 0=0.0%, 4=94.4%, 8=2.7%, 16=2.0%, 32=0.7%, 64=0.2%, >=64=0.0% 00:13:50.142 issued rwts: total=0,84444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.142 00:13:50.142 Run status group 0 (all jobs): 00:13:50.142 WRITE: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=330MiB (346MB), run=5002-5002msec 00:13:50.142 ----------------------------------------------------- 00:13:50.142 Suppressions used: 00:13:50.142 count bytes template 00:13:50.142 1 11 /usr/src/fio/parse.c 00:13:50.142 1 8 libtcmalloc_minimal.so 00:13:50.142 1 
904 libcrypto.so 00:13:50.142 ----------------------------------------------------- 00:13:50.142 00:13:50.142 00:13:50.142 real 0m13.576s 00:13:50.142 user 0m7.740s 00:13:50.142 sys 0m3.961s 00:13:50.142 16:01:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.142 ************************************ 00:13:50.142 END TEST xnvme_fio_plugin 00:13:50.142 ************************************ 00:13:50.142 16:01:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 16:01:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:50.142 16:01:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:50.142 16:01:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:50.142 16:01:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:50.142 16:01:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.142 16:01:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.142 16:01:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 ************************************ 00:13:50.142 START TEST xnvme_rpc 00:13:50.142 ************************************ 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:50.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69476 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69476 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69476 ']' 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 16:01:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:50.402 [2024-11-20 16:01:48.432735] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
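[A sketch, assuming the standard scripts/rpc.py client and the default /var/tmp/spdk.sock, of the RPC sequence this xnvme_rpc test drives through its rpc_cmd helper: create an xnvme bdev with conserve_cpu enabled (the -c flag from cc["true"] above), confirm the stored params via framework_get_config, then delete it. The jq filter is copied from the test's own rpc_xnvme helper.]

    # against the spdk_tgt started above (build/bin/spdk_tgt)
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev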
00:13:50.402 [2024-11-20 16:01:48.432857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69476 ] 00:13:50.402 [2024-11-20 16:01:48.592843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.661 [2024-11-20 16:01:48.695849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.235 xnvme_bdev 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:51.235 16:01:49 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69476 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69476 ']' 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69476 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.235 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69476 00:13:51.497 killing process with pid 69476 00:13:51.497 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.497 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.497 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69476' 00:13:51.497 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69476 00:13:51.497 16:01:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69476 00:13:52.927 00:13:52.927 real 0m2.624s 00:13:52.927 user 0m2.710s 00:13:52.928 sys 0m0.356s 00:13:52.928 16:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.928 16:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.928 ************************************ 00:13:52.928 END TEST xnvme_rpc 00:13:52.928 ************************************ 00:13:52.928 16:01:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:52.928 16:01:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:52.928 16:01:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.928 16:01:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.928 ************************************ 00:13:52.928 START TEST xnvme_bdevperf 00:13:52.928 ************************************ 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:52.928 16:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:52.928 { 00:13:52.928 "subsystems": [ 00:13:52.928 { 00:13:52.928 "subsystem": "bdev", 00:13:52.928 "config": [ 00:13:52.928 { 00:13:52.928 "params": { 00:13:52.928 "io_mechanism": "libaio", 00:13:52.928 "conserve_cpu": true, 00:13:52.928 "filename": "/dev/nvme0n1", 00:13:52.928 "name": "xnvme_bdev" 00:13:52.928 }, 00:13:52.928 "method": "bdev_xnvme_create" 00:13:52.928 }, 00:13:52.928 { 00:13:52.928 "method": "bdev_wait_for_examine" 00:13:52.928 } 00:13:52.928 ] 00:13:52.928 } 00:13:52.928 ] 00:13:52.928 } 00:13:52.928 [2024-11-20 16:01:51.095383] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:13:52.928 [2024-11-20 16:01:51.095498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69539 ] 00:13:53.186 [2024-11-20 16:01:51.256049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.186 [2024-11-20 16:01:51.355533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.443 Running I/O for 5 seconds... 00:13:55.777 38386.00 IOPS, 149.95 MiB/s [2024-11-20T16:01:54.968Z] 37262.50 IOPS, 145.56 MiB/s [2024-11-20T16:01:55.905Z] 36607.33 IOPS, 143.00 MiB/s [2024-11-20T16:01:56.842Z] 35669.50 IOPS, 139.33 MiB/s [2024-11-20T16:01:56.842Z] 35360.00 IOPS, 138.12 MiB/s 00:13:58.592 Latency(us) 00:13:58.592 [2024-11-20T16:01:56.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.592 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:58.592 xnvme_bdev : 5.01 35322.40 137.98 0.00 0.00 1806.10 36.04 62914.56 00:13:58.592 [2024-11-20T16:01:56.842Z] =================================================================================================================== 00:13:58.592 [2024-11-20T16:01:56.842Z] Total : 35322.40 137.98 0.00 0.00 1806.10 36.04 62914.56 00:13:59.158 16:01:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.158 16:01:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:59.158 16:01:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:59.158 16:01:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.158 16:01:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.158 { 00:13:59.158 "subsystems": [ 00:13:59.158 { 00:13:59.158 "subsystem": "bdev", 00:13:59.158 "config": [ 00:13:59.158 { 00:13:59.158 "params": { 00:13:59.158 "io_mechanism": "libaio", 00:13:59.158 "conserve_cpu": true, 00:13:59.158 "filename": "/dev/nvme0n1", 00:13:59.158 "name": "xnvme_bdev" 00:13:59.158 }, 00:13:59.158 "method": "bdev_xnvme_create" 00:13:59.158 }, 00:13:59.158 { 00:13:59.158 "method": "bdev_wait_for_examine" 00:13:59.158 } 00:13:59.158 ] 00:13:59.158 } 00:13:59.158 ] 00:13:59.158 } 00:13:59.158 [2024-11-20 16:01:57.277135] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
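[A quick sanity check on the latency tables, not part of the log: bdevperf's MiB/s column is just IOPS times the -o IO size. For the 4 KiB conserve_cpu randread run above:]

    # 35322.40 IOPS * 4096 B per IO, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 35322.40 * 4096 / 1048576 }'   # prints 137.98 MiB/s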
00:13:59.158 [2024-11-20 16:01:57.277254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69614 ] 00:13:59.415 [2024-11-20 16:01:57.432049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.415 [2024-11-20 16:01:57.516087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.671 Running I/O for 5 seconds... 00:14:01.537 8684.00 IOPS, 33.92 MiB/s [2024-11-20T16:02:01.206Z] 7679.00 IOPS, 30.00 MiB/s [2024-11-20T16:02:01.770Z] 7118.00 IOPS, 27.80 MiB/s [2024-11-20T16:02:03.141Z] 6945.50 IOPS, 27.13 MiB/s [2024-11-20T16:02:03.141Z] 6947.80 IOPS, 27.14 MiB/s 00:14:04.891 Latency(us) 00:14:04.891 [2024-11-20T16:02:03.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.891 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:04.891 xnvme_bdev : 5.01 6941.71 27.12 0.00 0.00 9202.26 43.32 253271.43 00:14:04.891 [2024-11-20T16:02:03.141Z] =================================================================================================================== 00:14:04.891 [2024-11-20T16:02:03.141Z] Total : 6941.71 27.12 0.00 0.00 9202.26 43.32 253271.43 00:14:05.455 00:14:05.455 real 0m12.445s 00:14:05.455 user 0m7.686s 00:14:05.455 sys 0m3.355s 00:14:05.455 16:02:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.455 ************************************ 00:14:05.455 END TEST xnvme_bdevperf 00:14:05.455 ************************************ 00:14:05.455 16:02:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 16:02:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:05.455 16:02:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.455 16:02:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.455 16:02:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 ************************************ 00:14:05.455 START TEST xnvme_fio_plugin 00:14:05.455 ************************************ 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:05.455 16:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.455 { 00:14:05.455 "subsystems": [ 00:14:05.455 { 00:14:05.455 "subsystem": "bdev", 00:14:05.455 "config": [ 00:14:05.455 { 00:14:05.455 "params": { 00:14:05.455 "io_mechanism": "libaio", 00:14:05.455 "conserve_cpu": true, 00:14:05.455 "filename": "/dev/nvme0n1", 00:14:05.455 "name": "xnvme_bdev" 00:14:05.455 }, 00:14:05.455 "method": "bdev_xnvme_create" 00:14:05.455 }, 00:14:05.455 { 00:14:05.455 "method": "bdev_wait_for_examine" 00:14:05.455 } 00:14:05.455 ] 00:14:05.455 } 00:14:05.455 ] 00:14:05.455 } 00:14:05.455 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:05.455 fio-3.35 00:14:05.455 Starting 1 thread 00:14:12.005 00:14:12.005 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69733: Wed Nov 20 16:02:09 2024 00:14:12.005 read: IOPS=44.1k, BW=172MiB/s (181MB/s)(863MiB/5012msec) 00:14:12.005 slat (usec): min=4, max=994, avg=19.16, stdev=24.81 00:14:12.005 clat (usec): min=7, max=20713, avg=872.15, stdev=592.90 00:14:12.005 lat (usec): min=37, max=20760, avg=891.31, stdev=595.54 00:14:12.005 clat percentiles (usec): 00:14:12.005 | 1.00th=[ 159], 5.00th=[ 243], 10.00th=[ 326], 20.00th=[ 453], 00:14:12.005 | 30.00th=[ 562], 40.00th=[ 660], 50.00th=[ 758], 60.00th=[ 873], 00:14:12.005 | 70.00th=[ 1004], 80.00th=[ 1188], 90.00th=[ 1483], 95.00th=[ 1844], 00:14:12.005 | 99.00th=[ 2900], 99.50th=[ 3392], 99.90th=[ 5276], 99.95th=[ 6194], 00:14:12.005 | 99.99th=[13698] 00:14:12.005 bw ( KiB/s): min=157152, max=226008, per=100.00%, 
avg=176678.40, stdev=20355.70, samples=10 00:14:12.005 iops : min=39288, max=56502, avg=44169.60, stdev=5088.92, samples=10 00:14:12.005 lat (usec) : 10=0.01%, 20=0.01%, 50=0.06%, 100=0.10%, 250=5.29% 00:14:12.005 lat (usec) : 500=18.83%, 750=24.95%, 1000=20.41% 00:14:12.005 lat (msec) : 2=26.38%, 4=3.71%, 10=0.25%, 20=0.02%, 50=0.01% 00:14:12.005 cpu : usr=29.75%, sys=50.33%, ctx=73, majf=0, minf=764 00:14:12.005 IO depths : 1=0.2%, 2=1.5%, 4=4.5%, 8=10.7%, 16=24.7%, 32=56.5%, >=64=2.0% 00:14:12.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.005 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:12.005 issued rwts: total=220885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.005 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:12.005 00:14:12.005 Run status group 0 (all jobs): 00:14:12.005 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=863MiB (905MB), run=5012-5012msec 00:14:12.261 ----------------------------------------------------- 00:14:12.261 Suppressions used: 00:14:12.261 count bytes template 00:14:12.261 1 11 /usr/src/fio/parse.c 00:14:12.261 1 8 libtcmalloc_minimal.so 00:14:12.261 1 904 libcrypto.so 00:14:12.261 ----------------------------------------------------- 00:14:12.261 00:14:12.261 16:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:12.261 16:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.261 16:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:12.262 16:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.262 { 00:14:12.262 "subsystems": [ 00:14:12.262 { 00:14:12.262 "subsystem": "bdev", 00:14:12.262 "config": [ 00:14:12.262 { 00:14:12.262 "params": { 00:14:12.262 "io_mechanism": "libaio", 00:14:12.262 "conserve_cpu": true, 00:14:12.262 "filename": "/dev/nvme0n1", 00:14:12.262 "name": "xnvme_bdev" 00:14:12.262 }, 00:14:12.262 "method": "bdev_xnvme_create" 00:14:12.262 }, 00:14:12.262 { 00:14:12.262 "method": "bdev_wait_for_examine" 00:14:12.262 } 00:14:12.262 ] 00:14:12.262 } 00:14:12.262 ] 00:14:12.262 } 00:14:12.262 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:12.262 fio-3.35 00:14:12.262 Starting 1 thread 00:14:18.831 00:14:18.831 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69825: Wed Nov 20 16:02:16 2024 00:14:18.831 write: IOPS=30.5k, BW=119MiB/s (125MB/s)(596MiB/5004msec); 0 zone resets 00:14:18.831 slat (usec): min=4, max=1475, avg=18.99, stdev=53.13 00:14:18.831 clat (usec): min=8, max=182169, avg=1629.06, stdev=6343.60 00:14:18.831 lat (usec): min=47, max=182174, avg=1648.05, stdev=6343.27 00:14:18.831 clat percentiles (usec): 00:14:18.831 | 1.00th=[ 127], 5.00th=[ 239], 10.00th=[ 326], 20.00th=[ 490], 00:14:18.831 | 30.00th=[ 644], 40.00th=[ 783], 50.00th=[ 914], 60.00th=[ 1057], 00:14:18.831 | 70.00th=[ 1254], 80.00th=[ 1565], 90.00th=[ 2868], 95.00th=[ 5407], 00:14:18.831 | 99.00th=[ 8455], 99.50th=[ 9896], 99.90th=[154141], 99.95th=[177210], 00:14:18.831 | 99.99th=[181404] 00:14:18.831 bw ( KiB/s): min=56232, max=174376, per=100.00%, avg=127675.56, stdev=38433.79, samples=9 00:14:18.831 iops : min=14058, max=43594, avg=31918.89, stdev=9608.45, samples=9 00:14:18.831 lat (usec) : 10=0.01%, 20=0.02%, 50=0.15%, 100=0.46%, 250=4.96% 00:14:18.831 lat (usec) : 500=14.97%, 750=17.23%, 1000=18.06% 00:14:18.831 lat (msec) : 2=29.63%, 4=7.17%, 10=6.87%, 20=0.28%, 50=0.04% 00:14:18.831 lat (msec) : 100=0.04%, 250=0.13% 00:14:18.831 cpu : usr=50.11%, sys=37.24%, ctx=28, majf=0, minf=765 00:14:18.831 IO depths : 1=0.2%, 2=0.8%, 4=2.7%, 8=7.2%, 16=18.6%, 32=66.5%, >=64=4.0% 00:14:18.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.831 complete : 0=0.0%, 4=97.2%, 8=0.5%, 16=0.5%, 32=0.6%, 64=1.3%, >=64=0.0% 00:14:18.831 issued rwts: total=0,152631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:18.831 00:14:18.831 Run status group 0 (all jobs): 00:14:18.831 WRITE: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=596MiB (625MB), run=5004-5004msec 00:14:18.831 ----------------------------------------------------- 00:14:18.831 Suppressions used: 00:14:18.831 count bytes template 00:14:18.831 1 11 /usr/src/fio/parse.c 00:14:18.831 1 8 libtcmalloc_minimal.so 
00:14:18.831 1 904 libcrypto.so 00:14:18.831 ----------------------------------------------------- 00:14:18.831 00:14:19.090 ************************************ 00:14:19.090 END TEST xnvme_fio_plugin 00:14:19.090 ************************************ 00:14:19.090 00:14:19.090 real 0m13.566s 00:14:19.090 user 0m6.658s 00:14:19.090 sys 0m4.869s 00:14:19.090 16:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.090 16:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.090 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:19.090 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:19.091 16:02:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:19.091 16:02:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.091 16:02:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.091 16:02:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.091 ************************************ 00:14:19.091 START TEST xnvme_rpc 00:14:19.091 ************************************ 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69906 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69906 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69906 ']' 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.091 16:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.091 [2024-11-20 16:02:17.226141] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
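[The loop has now switched io_mechanism to io_uring with conserve_cpu=false (the empty cc["false"] placeholder), and the same xnvme_rpc flow repeats. A hedged sketch of the equivalent direct calls, under the same scripts/rpc.py assumption as the earlier sketch:]

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev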
00:14:19.091 [2024-11-20 16:02:17.226265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69906 ] 00:14:19.351 [2024-11-20 16:02:17.378394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.352 [2024-11-20 16:02:17.487506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.922 xnvme_bdev 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.922 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.182 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.182 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:20.182 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69906 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69906 ']' 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69906 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69906 00:14:20.183 killing process with pid 69906 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69906' 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69906 00:14:20.183 16:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69906 00:14:21.568 00:14:21.568 real 0m2.638s 00:14:21.568 user 0m2.745s 00:14:21.568 sys 0m0.358s 00:14:21.568 16:02:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.568 ************************************ 00:14:21.568 END TEST xnvme_rpc 00:14:21.568 16:02:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.568 ************************************ 00:14:21.829 16:02:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:21.829 16:02:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.829 16:02:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.829 16:02:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.829 ************************************ 00:14:21.829 START TEST xnvme_bdevperf 00:14:21.829 ************************************ 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.829 16:02:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.829 { 00:14:21.829 "subsystems": [ 00:14:21.829 { 00:14:21.829 "subsystem": "bdev", 00:14:21.829 "config": [ 00:14:21.829 { 00:14:21.829 "params": { 00:14:21.829 "io_mechanism": "io_uring", 00:14:21.829 "conserve_cpu": false, 00:14:21.829 "filename": "/dev/nvme0n1", 00:14:21.829 "name": "xnvme_bdev" 00:14:21.829 }, 00:14:21.829 "method": "bdev_xnvme_create" 00:14:21.829 }, 00:14:21.829 { 00:14:21.829 "method": "bdev_wait_for_examine" 00:14:21.829 } 00:14:21.829 ] 00:14:21.829 } 00:14:21.829 ] 00:14:21.829 } 00:14:21.829 [2024-11-20 16:02:19.933329] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:21.829 [2024-11-20 16:02:19.933450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69980 ] 00:14:22.090 [2024-11-20 16:02:20.095844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.090 [2024-11-20 16:02:20.200559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.352 Running I/O for 5 seconds... 00:14:24.238 36660.00 IOPS, 143.20 MiB/s [2024-11-20T16:02:23.904Z] 36810.00 IOPS, 143.79 MiB/s [2024-11-20T16:02:24.476Z] 36983.67 IOPS, 144.47 MiB/s [2024-11-20T16:02:25.855Z] 37405.50 IOPS, 146.12 MiB/s [2024-11-20T16:02:25.855Z] 36965.80 IOPS, 144.40 MiB/s 00:14:27.605 Latency(us) 00:14:27.605 [2024-11-20T16:02:25.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.605 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:27.605 xnvme_bdev : 5.00 36947.09 144.32 0.00 0.00 1727.37 53.56 132281.90 00:14:27.605 [2024-11-20T16:02:25.855Z] =================================================================================================================== 00:14:27.605 [2024-11-20T16:02:25.855Z] Total : 36947.09 144.32 0.00 0.00 1727.37 53.56 132281.90 00:14:28.175 16:02:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:28.175 16:02:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:28.175 16:02:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:28.175 16:02:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:28.175 16:02:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:28.175 { 00:14:28.175 "subsystems": [ 00:14:28.175 { 00:14:28.175 "subsystem": "bdev", 00:14:28.175 "config": [ 00:14:28.175 { 00:14:28.175 "params": { 00:14:28.175 "io_mechanism": "io_uring", 00:14:28.175 "conserve_cpu": false, 00:14:28.175 "filename": "/dev/nvme0n1", 00:14:28.175 "name": "xnvme_bdev" 00:14:28.175 }, 00:14:28.175 "method": "bdev_xnvme_create" 00:14:28.175 }, 00:14:28.175 { 00:14:28.175 "method": "bdev_wait_for_examine" 00:14:28.175 } 00:14:28.175 ] 00:14:28.175 } 00:14:28.175 ] 00:14:28.175 } 00:14:28.175 [2024-11-20 16:02:26.347379] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
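[The fio-plugin runs in this log, including the io_uring one that follows, all wrap fio the same way: locate the ASan runtime that build/fio/spdk_bdev links against, then LD_PRELOAD it ahead of the plugin so the sanitizer initializes before the engine loads. A condensed sketch of that sequence with paths and flags taken from this run; the JSON bdev config is supplied on fd 62 as in the bdevperf sketch above:]

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # find the sanitizer runtime the plugin was linked with (the ldd | grep | awk steps above)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev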
00:14:28.175 [2024-11-20 16:02:26.347977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70055 ] 00:14:28.434 [2024-11-20 16:02:26.509586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.434 [2024-11-20 16:02:26.611113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.693 Running I/O for 5 seconds... 00:14:31.019 7604.00 IOPS, 29.70 MiB/s [2024-11-20T16:02:30.212Z] 7990.50 IOPS, 31.21 MiB/s [2024-11-20T16:02:31.154Z] 7727.33 IOPS, 30.18 MiB/s [2024-11-20T16:02:32.094Z] 7587.25 IOPS, 29.64 MiB/s [2024-11-20T16:02:32.094Z] 7507.40 IOPS, 29.33 MiB/s 00:14:33.844 Latency(us) 00:14:33.844 [2024-11-20T16:02:32.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.844 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:33.844 xnvme_bdev : 5.01 7503.49 29.31 0.00 0.00 8515.57 61.83 97598.23 00:14:33.844 [2024-11-20T16:02:32.094Z] =================================================================================================================== 00:14:33.844 [2024-11-20T16:02:32.094Z] Total : 7503.49 29.31 0.00 0.00 8515.57 61.83 97598.23 00:14:34.415 00:14:34.415 real 0m12.762s 00:14:34.415 user 0m5.965s 00:14:34.415 sys 0m6.543s 00:14:34.415 16:02:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.415 ************************************ 00:14:34.415 END TEST xnvme_bdevperf 00:14:34.415 ************************************ 00:14:34.415 16:02:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.676 16:02:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:34.676 16:02:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.676 16:02:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.676 16:02:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.676 ************************************ 00:14:34.676 START TEST xnvme_fio_plugin 00:14:34.676 ************************************ 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.676 16:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.676 { 00:14:34.676 "subsystems": [ 00:14:34.676 { 00:14:34.676 "subsystem": "bdev", 00:14:34.676 "config": [ 00:14:34.676 { 00:14:34.676 "params": { 00:14:34.676 "io_mechanism": "io_uring", 00:14:34.676 "conserve_cpu": false, 00:14:34.676 "filename": "/dev/nvme0n1", 00:14:34.676 "name": "xnvme_bdev" 00:14:34.676 }, 00:14:34.676 "method": "bdev_xnvme_create" 00:14:34.676 }, 00:14:34.676 { 00:14:34.676 "method": "bdev_wait_for_examine" 00:14:34.676 } 00:14:34.676 ] 00:14:34.676 } 00:14:34.676 ] 00:14:34.676 } 00:14:34.676 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:34.676 fio-3.35 00:14:34.676 Starting 1 thread 00:14:41.259 00:14:41.259 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70169: Wed Nov 20 16:02:38 2024 00:14:41.259 read: IOPS=35.9k, BW=140MiB/s (147MB/s)(701MiB/5001msec) 00:14:41.259 slat (usec): min=2, max=101, avg= 4.05, stdev= 2.42 00:14:41.259 clat (usec): min=796, max=4318, avg=1620.04, stdev=344.10 00:14:41.259 lat (usec): min=799, max=4321, avg=1624.08, stdev=344.55 00:14:41.259 clat percentiles (usec): 00:14:41.259 | 1.00th=[ 955], 5.00th=[ 1090], 10.00th=[ 1188], 20.00th=[ 1336], 00:14:41.259 | 30.00th=[ 1434], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1696], 00:14:41.259 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2040], 95.00th=[ 2212], 00:14:41.259 | 99.00th=[ 2573], 99.50th=[ 2769], 99.90th=[ 3392], 99.95th=[ 3589], 00:14:41.259 | 99.99th=[ 3752] 00:14:41.259 bw ( KiB/s): min=139264, max=146432, 
per=99.66%, avg=143041.67, stdev=2520.63, samples=9 00:14:41.259 iops : min=34816, max=36608, avg=35760.33, stdev=630.12, samples=9 00:14:41.259 lat (usec) : 1000=1.92% 00:14:41.259 lat (msec) : 2=86.18%, 4=11.90%, 10=0.01% 00:14:41.259 cpu : usr=34.70%, sys=64.04%, ctx=11, majf=0, minf=762 00:14:41.259 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:41.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.259 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:41.259 issued rwts: total=179454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.259 00:14:41.259 Run status group 0 (all jobs): 00:14:41.259 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=701MiB (735MB), run=5001-5001msec 00:14:41.259 ----------------------------------------------------- 00:14:41.259 Suppressions used: 00:14:41.259 count bytes template 00:14:41.259 1 11 /usr/src/fio/parse.c 00:14:41.259 1 8 libtcmalloc_minimal.so 00:14:41.259 1 904 libcrypto.so 00:14:41.259 ----------------------------------------------------- 00:14:41.259 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:41.259 16:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.259 { 00:14:41.259 "subsystems": [ 00:14:41.259 { 00:14:41.259 "subsystem": "bdev", 00:14:41.259 "config": [ 00:14:41.259 { 00:14:41.259 "params": { 00:14:41.259 "io_mechanism": "io_uring", 00:14:41.259 "conserve_cpu": false, 00:14:41.259 "filename": "/dev/nvme0n1", 00:14:41.259 "name": "xnvme_bdev" 00:14:41.259 }, 00:14:41.259 "method": "bdev_xnvme_create" 00:14:41.259 }, 00:14:41.259 { 00:14:41.259 "method": "bdev_wait_for_examine" 00:14:41.259 } 00:14:41.259 ] 00:14:41.259 } 00:14:41.259 ] 00:14:41.260 } 00:14:41.520 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:41.520 fio-3.35 00:14:41.520 Starting 1 thread 00:14:48.102 00:14:48.102 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70260: Wed Nov 20 16:02:45 2024 00:14:48.102 write: IOPS=31.4k, BW=123MiB/s (129MB/s)(613MiB/5001msec); 0 zone resets 00:14:48.102 slat (nsec): min=2841, max=74246, avg=4267.18, stdev=2458.59 00:14:48.102 clat (usec): min=164, max=289977, avg=1867.15, stdev=7253.24 00:14:48.102 lat (usec): min=170, max=289980, avg=1871.42, stdev=7253.27 00:14:48.102 clat percentiles (usec): 00:14:48.102 | 1.00th=[ 955], 5.00th=[ 1090], 10.00th=[ 1188], 20.00th=[ 1319], 00:14:48.102 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:14:48.102 | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2180], 00:14:48.102 | 99.00th=[ 2573], 99.50th=[ 2868], 99.90th=[108528], 99.95th=[168821], 00:14:48.102 | 99.99th=[287310] 00:14:48.102 bw ( KiB/s): min=66216, max=154608, per=100.00%, avg=136018.67, stdev=26536.14, samples=9 00:14:48.102 iops : min=16554, max=38652, avg=34004.67, stdev=6634.04, samples=9 00:14:48.102 lat (usec) : 250=0.01%, 500=0.04%, 750=0.06%, 1000=1.84% 00:14:48.102 lat (msec) : 2=88.15%, 4=9.68%, 10=0.05%, 20=0.02%, 50=0.01% 00:14:48.102 lat (msec) : 100=0.04%, 250=0.09%, 500=0.04% 00:14:48.102 cpu : usr=32.74%, sys=66.14%, ctx=10, majf=0, minf=763 00:14:48.102 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:14:48.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.102 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:48.102 issued rwts: total=0,157027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:48.102 00:14:48.102 Run status group 0 (all jobs): 00:14:48.102 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=613MiB (643MB), run=5001-5001msec 00:14:48.102 ----------------------------------------------------- 00:14:48.102 Suppressions used: 00:14:48.102 count bytes template 00:14:48.102 1 11 /usr/src/fio/parse.c 00:14:48.102 1 8 libtcmalloc_minimal.so 00:14:48.102 1 904 libcrypto.so 00:14:48.102 ----------------------------------------------------- 00:14:48.102 00:14:48.102 00:14:48.102 real 0m13.541s 
00:14:48.102 user 0m6.092s 00:14:48.102 sys 0m6.995s 00:14:48.102 16:02:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.102 16:02:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:48.102 ************************************ 00:14:48.102 END TEST xnvme_fio_plugin 00:14:48.102 ************************************ 00:14:48.102 16:02:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:48.102 16:02:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:48.102 16:02:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:48.102 16:02:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:48.102 16:02:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.102 16:02:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.102 16:02:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.102 ************************************ 00:14:48.102 START TEST xnvme_rpc 00:14:48.102 ************************************ 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70341 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70341 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70341 ']' 00:14:48.102 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.103 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.103 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.103 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.103 16:02:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.103 16:02:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.364 [2024-11-20 16:02:46.373045] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
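
The xnvme_rpc test starting here drives the freshly launched spdk_tgt entirely over its RPC socket. A sketch of that exchange, assuming the test's rpc_cmd wrapper forwards to scripts/rpc.py against the default /var/tmp/spdk.sock named in the waitforlisten message above:

    # Assumed plumbing: rpc_cmd -> scripts/rpc.py (socket /var/tmp/spdk.sock).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c selects conserve_cpu
    # Each property assertion reads the live config back and filters it with jq:
    $rpc framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    $rpc bdev_xnvme_delete xnvme_bdev
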
00:14:48.364 [2024-11-20 16:02:46.373174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70341 ] 00:14:48.364 [2024-11-20 16:02:46.525804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.624 [2024-11-20 16:02:46.627276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.231 xnvme_bdev 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70341 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70341 ']' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70341 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70341 00:14:49.231 killing process with pid 70341 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70341' 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70341 00:14:49.231 16:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70341 00:14:51.147 00:14:51.147 real 0m2.658s 00:14:51.147 user 0m2.756s 00:14:51.147 sys 0m0.369s 00:14:51.147 16:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.147 ************************************ 00:14:51.147 END TEST xnvme_rpc 00:14:51.147 ************************************ 00:14:51.147 16:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.147 16:02:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:51.147 16:02:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.147 16:02:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.147 16:02:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.147 ************************************ 00:14:51.147 START TEST xnvme_bdevperf 00:14:51.147 ************************************ 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:51.147 16:02:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.147 { 00:14:51.147 "subsystems": [ 00:14:51.147 { 00:14:51.147 "subsystem": "bdev", 00:14:51.147 "config": [ 00:14:51.147 { 00:14:51.147 "params": { 00:14:51.147 "io_mechanism": "io_uring", 00:14:51.147 "conserve_cpu": true, 00:14:51.147 "filename": "/dev/nvme0n1", 00:14:51.147 "name": "xnvme_bdev" 00:14:51.147 }, 00:14:51.147 "method": "bdev_xnvme_create" 00:14:51.147 }, 00:14:51.147 { 00:14:51.147 "method": "bdev_wait_for_examine" 00:14:51.147 } 00:14:51.147 ] 00:14:51.147 } 00:14:51.147 ] 00:14:51.147 } 00:14:51.147 [2024-11-20 16:02:49.077320] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:51.147 [2024-11-20 16:02:49.077440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70415 ] 00:14:51.147 [2024-11-20 16:02:49.229145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.147 [2024-11-20 16:02:49.332166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.409 Running I/O for 5 seconds... 00:14:53.736 41201.00 IOPS, 160.94 MiB/s [2024-11-20T16:02:52.930Z] 38574.50 IOPS, 150.68 MiB/s [2024-11-20T16:02:53.875Z] 38395.67 IOPS, 149.98 MiB/s [2024-11-20T16:02:54.832Z] 38530.25 IOPS, 150.51 MiB/s 00:14:56.582 Latency(us) 00:14:56.582 [2024-11-20T16:02:54.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.582 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:56.582 xnvme_bdev : 5.00 38701.58 151.18 0.00 0.00 1649.68 299.32 81062.99 00:14:56.582 [2024-11-20T16:02:54.832Z] =================================================================================================================== 00:14:56.582 [2024-11-20T16:02:54.832Z] Total : 38701.58 151.18 0.00 0.00 1649.68 299.32 81062.99 00:14:57.155 16:02:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.156 16:02:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:57.156 16:02:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:57.156 16:02:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:57.156 16:02:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.156 { 00:14:57.156 "subsystems": [ 00:14:57.156 { 00:14:57.156 "subsystem": "bdev", 00:14:57.156 "config": [ 00:14:57.156 { 00:14:57.156 "params": { 00:14:57.156 "io_mechanism": "io_uring", 00:14:57.156 "conserve_cpu": true, 00:14:57.156 "filename": "/dev/nvme0n1", 00:14:57.156 "name": "xnvme_bdev" 00:14:57.156 }, 00:14:57.156 "method": "bdev_xnvme_create" 00:14:57.156 }, 00:14:57.156 { 00:14:57.156 "method": "bdev_wait_for_examine" 00:14:57.156 } 00:14:57.156 ] 00:14:57.156 } 00:14:57.156 ] 00:14:57.156 } 00:14:57.156 [2024-11-20 16:02:55.373117] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:14:57.156 [2024-11-20 16:02:55.373243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70486 ] 00:14:57.417 [2024-11-20 16:02:55.538031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.417 [2024-11-20 16:02:55.639345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.677 Running I/O for 5 seconds... 00:15:00.005 10262.00 IOPS, 40.09 MiB/s [2024-11-20T16:02:59.194Z] 10549.00 IOPS, 41.21 MiB/s [2024-11-20T16:03:00.134Z] 10459.33 IOPS, 40.86 MiB/s [2024-11-20T16:03:01.075Z] 10465.00 IOPS, 40.88 MiB/s [2024-11-20T16:03:01.075Z] 10787.80 IOPS, 42.14 MiB/s 00:15:02.825 Latency(us) 00:15:02.825 [2024-11-20T16:03:01.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.825 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:02.825 xnvme_bdev : 5.01 10785.74 42.13 0.00 0.00 5924.72 56.32 27625.94 00:15:02.825 [2024-11-20T16:03:01.075Z] =================================================================================================================== 00:15:02.825 [2024-11-20T16:03:01.075Z] Total : 10785.74 42.13 0.00 0.00 5924.72 56.32 27625.94 00:15:03.394 00:15:03.394 real 0m12.610s 00:15:03.394 user 0m9.574s 00:15:03.394 sys 0m2.162s 00:15:03.394 ************************************ 00:15:03.394 END TEST xnvme_bdevperf 00:15:03.394 16:03:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.394 16:03:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:03.394 ************************************ 00:15:03.654 16:03:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:03.654 16:03:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:03.654 16:03:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.654 16:03:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.654 ************************************ 00:15:03.654 START TEST xnvme_fio_plugin 00:15:03.654 ************************************ 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:03.654 16:03:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:03.654 { 00:15:03.654 "subsystems": [ 00:15:03.654 { 00:15:03.654 "subsystem": "bdev", 00:15:03.654 "config": [ 00:15:03.654 { 00:15:03.654 "params": { 00:15:03.654 "io_mechanism": "io_uring", 00:15:03.654 "conserve_cpu": true, 00:15:03.654 "filename": "/dev/nvme0n1", 00:15:03.654 "name": "xnvme_bdev" 00:15:03.654 }, 00:15:03.654 "method": "bdev_xnvme_create" 00:15:03.654 }, 00:15:03.654 { 00:15:03.654 "method": "bdev_wait_for_examine" 00:15:03.654 } 00:15:03.654 ] 00:15:03.654 } 00:15:03.654 ] 00:15:03.654 } 00:15:03.654 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:03.654 fio-3.35 00:15:03.654 Starting 1 thread 00:15:10.352 00:15:10.352 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70607: Wed Nov 20 16:03:07 2024 00:15:10.352 read: IOPS=39.3k, BW=154MiB/s (161MB/s)(769MiB/5001msec) 00:15:10.352 slat (nsec): min=2799, max=82063, avg=4275.46, stdev=2527.14 00:15:10.352 clat (usec): min=711, max=3270, avg=1458.72, stdev=303.29 00:15:10.352 lat (usec): min=714, max=3274, avg=1462.99, stdev=304.12 00:15:10.352 clat percentiles (usec): 00:15:10.352 | 1.00th=[ 898], 5.00th=[ 1029], 10.00th=[ 1106], 20.00th=[ 1205], 00:15:10.352 | 30.00th=[ 1287], 40.00th=[ 1352], 50.00th=[ 1418], 60.00th=[ 1500], 00:15:10.352 | 70.00th=[ 1582], 80.00th=[ 1696], 90.00th=[ 1860], 95.00th=[ 2008], 00:15:10.352 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 2868], 99.95th=[ 3032], 00:15:10.352 | 99.99th=[ 3195] 00:15:10.352 bw ( KiB/s): min=141824, 
max=171520, per=99.90%, avg=157206.33, stdev=8758.41, samples=9 00:15:10.352 iops : min=35456, max=42880, avg=39301.56, stdev=2189.61, samples=9 00:15:10.352 lat (usec) : 750=0.02%, 1000=3.71% 00:15:10.352 lat (msec) : 2=91.10%, 4=5.17% 00:15:10.352 cpu : usr=63.86%, sys=32.92%, ctx=39, majf=0, minf=762 00:15:10.352 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:10.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.352 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:10.352 issued rwts: total=196736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.352 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:10.352 00:15:10.352 Run status group 0 (all jobs): 00:15:10.352 READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=769MiB (806MB), run=5001-5001msec 00:15:10.352 ----------------------------------------------------- 00:15:10.352 Suppressions used: 00:15:10.352 count bytes template 00:15:10.352 1 11 /usr/src/fio/parse.c 00:15:10.352 1 8 libtcmalloc_minimal.so 00:15:10.352 1 904 libcrypto.so 00:15:10.352 ----------------------------------------------------- 00:15:10.352 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:10.352 16:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.352 { 00:15:10.352 "subsystems": [ 00:15:10.352 { 00:15:10.352 "subsystem": "bdev", 00:15:10.352 "config": [ 00:15:10.352 { 00:15:10.352 "params": { 00:15:10.352 "io_mechanism": "io_uring", 00:15:10.352 "conserve_cpu": true, 00:15:10.352 "filename": "/dev/nvme0n1", 00:15:10.352 "name": "xnvme_bdev" 00:15:10.352 }, 00:15:10.352 "method": "bdev_xnvme_create" 00:15:10.352 }, 00:15:10.352 { 00:15:10.352 "method": "bdev_wait_for_examine" 00:15:10.352 } 00:15:10.352 ] 00:15:10.352 } 00:15:10.352 ] 00:15:10.352 } 00:15:10.612 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:10.612 fio-3.35 00:15:10.612 Starting 1 thread 00:15:17.195 00:15:17.195 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70703: Wed Nov 20 16:03:14 2024 00:15:17.195 write: IOPS=35.9k, BW=140MiB/s (147MB/s)(702MiB/5007msec); 0 zone resets 00:15:17.195 slat (usec): min=2, max=435, avg= 4.11, stdev= 2.78 00:15:17.195 clat (usec): min=58, max=14251, avg=1632.62, stdev=1178.31 00:15:17.195 lat (usec): min=61, max=14255, avg=1636.73, stdev=1178.42 00:15:17.195 clat percentiles (usec): 00:15:17.195 | 1.00th=[ 519], 5.00th=[ 914], 10.00th=[ 1057], 20.00th=[ 1205], 00:15:17.195 | 30.00th=[ 1287], 40.00th=[ 1352], 50.00th=[ 1434], 60.00th=[ 1516], 00:15:17.195 | 70.00th=[ 1614], 80.00th=[ 1729], 90.00th=[ 1942], 95.00th=[ 2212], 00:15:17.195 | 99.00th=[ 8455], 99.50th=[ 9503], 99.90th=[11469], 99.95th=[12125], 00:15:17.195 | 99.99th=[13173] 00:15:17.195 bw ( KiB/s): min=98024, max=164768, per=100.00%, avg=148924.33, stdev=19813.76, samples=9 00:15:17.195 iops : min=24506, max=41192, avg=37231.00, stdev=4953.46, samples=9 00:15:17.195 lat (usec) : 100=0.03%, 250=0.20%, 500=0.71%, 750=1.55%, 1000=5.06% 00:15:17.195 lat (msec) : 2=84.10%, 4=5.23%, 10=2.76%, 20=0.37% 00:15:17.195 cpu : usr=69.50%, sys=25.91%, ctx=39, majf=0, minf=763 00:15:17.195 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.1%, 16=22.9%, 32=54.2%, >=64=2.2% 00:15:17.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.195 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.2%, 32=0.2%, 64=1.4%, >=64=0.0% 00:15:17.195 issued rwts: total=0,179586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.195 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:17.195 00:15:17.195 Run status group 0 (all jobs): 00:15:17.195 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=702MiB (736MB), run=5007-5007msec 00:15:17.195 ----------------------------------------------------- 00:15:17.195 Suppressions used: 00:15:17.195 count bytes template 00:15:17.195 1 11 /usr/src/fio/parse.c 00:15:17.195 1 8 libtcmalloc_minimal.so 00:15:17.195 1 904 libcrypto.so 00:15:17.195 ----------------------------------------------------- 00:15:17.195 00:15:17.195 ************************************ 00:15:17.195 END TEST xnvme_fio_plugin 00:15:17.195 
************************************ 00:15:17.195 00:15:17.195 real 0m13.536s 00:15:17.195 user 0m9.367s 00:15:17.195 sys 0m3.439s 00:15:17.195 16:03:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.195 16:03:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:17.195 16:03:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:17.195 16:03:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:17.195 16:03:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.195 16:03:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.195 ************************************ 00:15:17.195 START TEST xnvme_rpc 00:15:17.195 ************************************ 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70784 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70784 00:15:17.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70784 ']' 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.195 16:03:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.195 [2024-11-20 16:03:15.368470] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:15:17.195 [2024-11-20 16:03:15.368741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:15:17.455 [2024-11-20 16:03:15.530682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.455 [2024-11-20 16:03:15.630961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.024 xnvme_bdev 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.024 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:18.286 
16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70784 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70784 ']' 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70784 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70784 00:15:18.286 killing process with pid 70784 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70784' 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70784 00:15:18.286 16:03:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70784 00:15:19.734 ************************************ 00:15:19.734 END TEST xnvme_rpc 00:15:19.734 ************************************ 00:15:19.734 00:15:19.734 real 0m2.658s 00:15:19.734 user 0m2.759s 00:15:19.734 sys 0m0.354s 00:15:19.734 16:03:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.734 16:03:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.995 16:03:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:19.995 16:03:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:19.995 16:03:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.995 16:03:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 ************************************ 00:15:19.996 START TEST xnvme_bdevperf 00:15:19.996 ************************************ 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:19.996 16:03:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 { 00:15:19.996 "subsystems": [ 00:15:19.996 { 00:15:19.996 "subsystem": "bdev", 00:15:19.996 "config": [ 00:15:19.996 { 00:15:19.996 "params": { 00:15:19.996 "io_mechanism": "io_uring_cmd", 00:15:19.996 "conserve_cpu": false, 00:15:19.996 "filename": "/dev/ng0n1", 00:15:19.996 "name": "xnvme_bdev" 00:15:19.996 }, 00:15:19.996 "method": "bdev_xnvme_create" 00:15:19.996 }, 00:15:19.996 { 00:15:19.996 "method": "bdev_wait_for_examine" 00:15:19.996 } 00:15:19.996 ] 00:15:19.996 } 00:15:19.996 ] 00:15:19.996 } 00:15:19.996 [2024-11-20 16:03:18.073801] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:19.996 [2024-11-20 16:03:18.074057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70852 ] 00:15:19.996 [2024-11-20 16:03:18.235714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.256 [2024-11-20 16:03:18.388016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.516 Running I/O for 5 seconds... 00:15:22.395 38380.00 IOPS, 149.92 MiB/s [2024-11-20T16:03:22.061Z] 38063.50 IOPS, 148.69 MiB/s [2024-11-20T16:03:23.001Z] 37950.00 IOPS, 148.24 MiB/s [2024-11-20T16:03:23.942Z] 38795.50 IOPS, 151.54 MiB/s [2024-11-20T16:03:23.942Z] 39266.60 IOPS, 153.39 MiB/s 00:15:25.692 Latency(us) 00:15:25.692 [2024-11-20T16:03:23.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.692 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:25.692 xnvme_bdev : 5.01 39238.44 153.28 0.00 0.00 1627.19 335.56 13409.67 00:15:25.692 [2024-11-20T16:03:23.942Z] =================================================================================================================== 00:15:25.692 [2024-11-20T16:03:23.942Z] Total : 39238.44 153.28 0.00 0.00 1627.19 335.56 13409.67 00:15:26.264 16:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:26.264 16:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:26.264 16:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:26.264 16:03:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:26.264 16:03:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:26.264 { 00:15:26.264 "subsystems": [ 00:15:26.264 { 00:15:26.264 "subsystem": "bdev", 00:15:26.264 "config": [ 00:15:26.264 { 00:15:26.264 "params": { 00:15:26.264 "io_mechanism": "io_uring_cmd", 00:15:26.264 "conserve_cpu": false, 00:15:26.264 "filename": "/dev/ng0n1", 00:15:26.264 "name": "xnvme_bdev" 00:15:26.264 }, 00:15:26.264 "method": "bdev_xnvme_create" 00:15:26.264 }, 00:15:26.264 { 00:15:26.264 "method": "bdev_wait_for_examine" 00:15:26.264 } 00:15:26.264 ] 00:15:26.264 } 00:15:26.264 ] 00:15:26.264 } 00:15:26.264 [2024-11-20 16:03:24.437512] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:15:26.264 [2024-11-20 16:03:24.437630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:15:26.525 [2024-11-20 16:03:24.596670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.525 [2024-11-20 16:03:24.698843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.786 Running I/O for 5 seconds... 00:15:29.111 38765.00 IOPS, 151.43 MiB/s [2024-11-20T16:03:28.302Z] 39282.00 IOPS, 153.45 MiB/s [2024-11-20T16:03:29.247Z] 39724.00 IOPS, 155.17 MiB/s [2024-11-20T16:03:30.310Z] 39554.75 IOPS, 154.51 MiB/s [2024-11-20T16:03:30.310Z] 36162.20 IOPS, 141.26 MiB/s 00:15:32.060 Latency(us) 00:15:32.060 [2024-11-20T16:03:30.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.060 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:32.060 xnvme_bdev : 5.00 36136.39 141.16 0.00 0.00 1766.10 71.68 16131.94 00:15:32.060 [2024-11-20T16:03:30.310Z] =================================================================================================================== 00:15:32.060 [2024-11-20T16:03:30.310Z] Total : 36136.39 141.16 0.00 0.00 1766.10 71.68 16131.94 00:15:32.632 16:03:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:32.632 16:03:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:32.632 16:03:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:32.633 16:03:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:32.633 16:03:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:32.633 { 00:15:32.633 "subsystems": [ 00:15:32.633 { 00:15:32.633 "subsystem": "bdev", 00:15:32.633 "config": [ 00:15:32.633 { 00:15:32.633 "params": { 00:15:32.633 "io_mechanism": "io_uring_cmd", 00:15:32.633 "conserve_cpu": false, 00:15:32.633 "filename": "/dev/ng0n1", 00:15:32.633 "name": "xnvme_bdev" 00:15:32.633 }, 00:15:32.633 "method": "bdev_xnvme_create" 00:15:32.633 }, 00:15:32.633 { 00:15:32.633 "method": "bdev_wait_for_examine" 00:15:32.633 } 00:15:32.633 ] 00:15:32.633 } 00:15:32.633 ] 00:15:32.633 } 00:15:32.633 [2024-11-20 16:03:30.736852] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:32.633 [2024-11-20 16:03:30.736970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71005 ] 00:15:32.894 [2024-11-20 16:03:30.889216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.894 [2024-11-20 16:03:30.992209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.156 Running I/O for 5 seconds... 
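
The unmap pass beginning here reuses the same bdev configuration; only bdevperf's workload selector changes (command reproduced from the xtrace above):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
    # Reading the throughput samples that follow: MiB/s = IOPS * 4096 / 1048576,
    # e.g. the 5 s total below, 58163.62 * 4096 / 1048576 = 227.20 MiB/s.
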
00:15:35.041 60672.00 IOPS, 237.00 MiB/s [2024-11-20T16:03:34.686Z] 59296.00 IOPS, 231.62 MiB/s [2024-11-20T16:03:35.258Z] 58581.33 IOPS, 228.83 MiB/s [2024-11-20T16:03:36.643Z] 58944.00 IOPS, 230.25 MiB/s 00:15:38.393 Latency(us) 00:15:38.393 [2024-11-20T16:03:36.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.393 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:38.393 xnvme_bdev : 5.00 58163.62 227.20 0.00 0.00 1096.69 554.54 3453.24 00:15:38.393 [2024-11-20T16:03:36.643Z] =================================================================================================================== 00:15:38.393 [2024-11-20T16:03:36.643Z] Total : 58163.62 227.20 0.00 0.00 1096.69 554.54 3453.24 00:15:38.967 16:03:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:38.967 16:03:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:38.967 16:03:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:38.967 16:03:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:38.967 16:03:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:38.967 { 00:15:38.967 "subsystems": [ 00:15:38.967 { 00:15:38.967 "subsystem": "bdev", 00:15:38.967 "config": [ 00:15:38.967 { 00:15:38.967 "params": { 00:15:38.967 "io_mechanism": "io_uring_cmd", 00:15:38.967 "conserve_cpu": false, 00:15:38.967 "filename": "/dev/ng0n1", 00:15:38.967 "name": "xnvme_bdev" 00:15:38.967 }, 00:15:38.967 "method": "bdev_xnvme_create" 00:15:38.967 }, 00:15:38.967 { 00:15:38.967 "method": "bdev_wait_for_examine" 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 } 00:15:38.967 ] 00:15:38.967 } 00:15:38.967 [2024-11-20 16:03:37.036150] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:38.967 [2024-11-20 16:03:37.036269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:15:38.967 [2024-11-20 16:03:37.197365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.229 [2024-11-20 16:03:37.300258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.489 Running I/O for 5 seconds... 
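
write_zeroes is exercised the same way (command reproduced from the xtrace above). Note in the totals below that the measured runtime stretches past the 5 s window to 5.32 s, most likely time spent draining in-flight requests, consistent with the very high average latency this pass records at queue depth 64:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
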
00:15:41.371 4854.00 IOPS, 18.96 MiB/s [2024-11-20T16:03:40.560Z] 3311.00 IOPS, 12.93 MiB/s [2024-11-20T16:03:41.941Z] 2279.33 IOPS, 8.90 MiB/s [2024-11-20T16:03:42.883Z] 1765.75 IOPS, 6.90 MiB/s [2024-11-20T16:03:42.883Z] 1506.20 IOPS, 5.88 MiB/s 00:15:44.633 Latency(us) 00:15:44.633 [2024-11-20T16:03:42.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.633 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:44.633 xnvme_bdev : 5.32 1428.74 5.58 0.00 0.00 43400.23 109.49 725937.23 00:15:44.633 [2024-11-20T16:03:42.883Z] =================================================================================================================== 00:15:44.633 [2024-11-20T16:03:42.883Z] Total : 1428.74 5.58 0.00 0.00 43400.23 109.49 725937.23 00:15:45.646 00:15:45.646 real 0m25.591s 00:15:45.646 user 0m14.211s 00:15:45.646 sys 0m10.865s 00:15:45.646 16:03:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.646 ************************************ 00:15:45.646 END TEST xnvme_bdevperf 00:15:45.646 ************************************ 00:15:45.646 16:03:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:45.646 16:03:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:45.646 16:03:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:45.646 16:03:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.646 16:03:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.646 ************************************ 00:15:45.646 START TEST xnvme_fio_plugin 00:15:45.646 ************************************ 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 
-- # for sanitizer in "${sanitizers[@]}" 00:15:45.646 16:03:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:45.647 16:03:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:45.647 { 00:15:45.647 "subsystems": [ 00:15:45.647 { 00:15:45.647 "subsystem": "bdev", 00:15:45.647 "config": [ 00:15:45.647 { 00:15:45.647 "params": { 00:15:45.647 "io_mechanism": "io_uring_cmd", 00:15:45.647 "conserve_cpu": false, 00:15:45.647 "filename": "/dev/ng0n1", 00:15:45.647 "name": "xnvme_bdev" 00:15:45.647 }, 00:15:45.647 "method": "bdev_xnvme_create" 00:15:45.647 }, 00:15:45.647 { 00:15:45.647 "method": "bdev_wait_for_examine" 00:15:45.647 } 00:15:45.647 ] 00:15:45.647 } 00:15:45.647 ] 00:15:45.647 } 00:15:45.647 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:45.647 fio-3.35 00:15:45.647 Starting 1 thread 00:15:52.231 00:15:52.231 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71194: Wed Nov 20 16:03:49 2024 00:15:52.231 read: IOPS=41.2k, BW=161MiB/s (169MB/s)(806MiB/5001msec) 00:15:52.231 slat (usec): min=2, max=114, avg= 4.23, stdev= 2.78 00:15:52.231 clat (usec): min=502, max=3938, avg=1383.65, stdev=429.76 00:15:52.231 lat (usec): min=505, max=3943, avg=1387.88, stdev=430.49 00:15:52.231 clat percentiles (usec): 00:15:52.231 | 1.00th=[ 685], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 914], 00:15:52.231 | 30.00th=[ 1090], 40.00th=[ 1270], 50.00th=[ 1418], 60.00th=[ 1532], 00:15:52.231 | 70.00th=[ 1647], 80.00th=[ 1762], 90.00th=[ 1926], 95.00th=[ 2073], 00:15:52.231 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2900], 99.95th=[ 3195], 00:15:52.231 | 99.99th=[ 3851] 00:15:52.231 bw ( KiB/s): min=133632, max=218624, per=98.85%, avg=163068.78, stdev=29366.25, samples=9 00:15:52.231 iops : min=33408, max=54656, avg=40767.11, stdev=7341.63, samples=9 00:15:52.231 lat (usec) : 750=5.14%, 1000=20.26% 00:15:52.231 lat (msec) : 2=67.43%, 4=7.17% 00:15:52.231 cpu : usr=40.48%, sys=58.38%, ctx=10, majf=0, minf=762 00:15:52.231 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:52.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.231 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:52.231 issued rwts: total=206250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:52.231 00:15:52.231 Run status group 0 (all jobs): 00:15:52.231 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=806MiB (845MB), run=5001-5001msec 00:15:52.491 ----------------------------------------------------- 00:15:52.491 Suppressions used: 00:15:52.491 count bytes template 00:15:52.491 1 11 /usr/src/fio/parse.c 00:15:52.491 1 8 libtcmalloc_minimal.so 00:15:52.491 1 904 libcrypto.so 00:15:52.491 ----------------------------------------------------- 00:15:52.491 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:52.491 16:03:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:52.491 { 00:15:52.491 "subsystems": [ 00:15:52.491 { 00:15:52.491 "subsystem": "bdev", 00:15:52.491 "config": [ 00:15:52.491 { 00:15:52.491 "params": { 00:15:52.491 "io_mechanism": "io_uring_cmd", 00:15:52.491 "conserve_cpu": false, 00:15:52.491 "filename": "/dev/ng0n1", 00:15:52.491 "name": "xnvme_bdev" 00:15:52.491 }, 00:15:52.491 "method": "bdev_xnvme_create" 00:15:52.491 }, 00:15:52.491 { 00:15:52.491 "method": "bdev_wait_for_examine" 00:15:52.491 } 00:15:52.491 ] 00:15:52.491 } 00:15:52.491 ] 00:15:52.491 } 00:15:52.491 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:52.491 fio-3.35 00:15:52.491 Starting 1 thread 00:15:59.049 00:15:59.049 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71285: Wed Nov 20 16:03:56 2024 00:15:59.049 write: IOPS=48.3k, BW=189MiB/s (198MB/s)(944MiB/5001msec); 0 zone resets 00:15:59.049 slat (nsec): min=2144, max=82218, avg=4294.35, stdev=2239.63 00:15:59.049 clat (usec): min=81, max=127807, avg=1158.51, stdev=2408.20 00:15:59.049 lat (usec): min=84, max=127812, avg=1162.81, stdev=2408.33 00:15:59.049 clat percentiles (usec): 00:15:59.049 | 1.00th=[ 660], 5.00th=[ 701], 10.00th=[ 734], 20.00th=[ 791], 00:15:59.049 | 30.00th=[ 840], 40.00th=[ 898], 50.00th=[ 963], 60.00th=[ 1074], 00:15:59.049 | 70.00th=[ 1221], 80.00th=[ 1434], 90.00th=[ 1663], 95.00th=[ 1827], 00:15:59.049 | 99.00th=[ 2147], 99.50th=[ 2311], 99.90th=[ 8979], 99.95th=[ 67634], 00:15:59.049 | 99.99th=[127402] 00:15:59.049 bw ( KiB/s): min=132854, max=244736, per=100.00%, avg=196347.33, stdev=48306.38, samples=9 00:15:59.049 iops : min=33213, max=61184, avg=49086.78, stdev=12076.68, samples=9 00:15:59.049 lat (usec) : 100=0.01%, 250=0.01%, 500=0.02%, 750=13.10%, 1000=40.15% 00:15:59.049 lat (msec) : 2=44.62%, 4=2.00%, 10=0.02%, 20=0.01%, 50=0.03% 00:15:59.049 lat (msec) : 100=0.03%, 250=0.03% 00:15:59.049 cpu : usr=42.58%, sys=56.54%, ctx=9, majf=0, minf=763 00:15:59.049 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:15:59.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.049 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:59.049 issued rwts: total=0,241718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:59.049 00:15:59.049 Run status group 0 (all jobs): 00:15:59.049 WRITE: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=944MiB (990MB), run=5001-5001msec 00:15:59.308 ----------------------------------------------------- 00:15:59.308 Suppressions used: 00:15:59.308 count bytes template 00:15:59.308 1 11 /usr/src/fio/parse.c 00:15:59.308 1 8 libtcmalloc_minimal.so 00:15:59.308 1 904 libcrypto.so 00:15:59.308 ----------------------------------------------------- 00:15:59.308 00:15:59.308 00:15:59.308 real 0m13.695s 00:15:59.308 user 0m7.019s 00:15:59.308 sys 0m6.248s 00:15:59.308 ************************************ 00:15:59.308 END TEST xnvme_fio_plugin 00:15:59.308 ************************************ 00:15:59.308 16:03:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.308 16:03:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:59.308 16:03:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:59.308 16:03:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:59.308 16:03:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:59.308 16:03:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:59.308 16:03:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:59.308 16:03:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.308 16:03:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.308 ************************************ 00:15:59.308 START TEST xnvme_rpc 00:15:59.308 ************************************ 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:59.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71370 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71370 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71370 ']' 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.308 16:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:59.308 [2024-11-20 16:03:57.498832] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:15:59.308 [2024-11-20 16:03:57.498957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71370 ] 00:15:59.567 [2024-11-20 16:03:57.662778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.567 [2024-11-20 16:03:57.764987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.135 xnvme_bdev 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.135 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:00.395 
16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71370 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71370 ']' 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71370 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71370 00:16:00.395 killing process with pid 71370 00:16:00.395 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.396 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.396 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71370' 00:16:00.396 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71370 00:16:00.396 16:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71370 00:16:02.294 ************************************ 00:16:02.294 END TEST xnvme_rpc 00:16:02.294 ************************************ 00:16:02.294 00:16:02.294 real 0m2.654s 00:16:02.294 user 0m2.751s 00:16:02.294 sys 0m0.374s 00:16:02.294 16:04:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.294 16:04:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.294 16:04:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:02.294 16:04:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:02.294 16:04:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.294 16:04:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.294 ************************************ 00:16:02.294 START TEST xnvme_bdevperf 00:16:02.294 ************************************ 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:02.294 16:04:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:02.294 { 00:16:02.294 "subsystems": [ 00:16:02.294 { 00:16:02.294 "subsystem": "bdev", 00:16:02.294 "config": [ 00:16:02.294 { 00:16:02.294 "params": { 00:16:02.294 "io_mechanism": "io_uring_cmd", 00:16:02.294 "conserve_cpu": true, 00:16:02.294 "filename": "/dev/ng0n1", 00:16:02.294 "name": "xnvme_bdev" 00:16:02.294 }, 00:16:02.294 "method": "bdev_xnvme_create" 00:16:02.294 }, 00:16:02.294 { 00:16:02.294 "method": "bdev_wait_for_examine" 00:16:02.294 } 00:16:02.294 ] 00:16:02.294 } 00:16:02.294 ] 00:16:02.294 } 00:16:02.294 [2024-11-20 16:04:00.189872] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:02.294 [2024-11-20 16:04:00.189995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:16:02.294 [2024-11-20 16:04:00.351986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.294 [2024-11-20 16:04:00.451587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.552 Running I/O for 5 seconds... 00:16:04.854 58137.00 IOPS, 227.10 MiB/s [2024-11-20T16:04:04.036Z] 59644.50 IOPS, 232.99 MiB/s [2024-11-20T16:04:04.966Z] 59978.67 IOPS, 234.29 MiB/s [2024-11-20T16:04:05.896Z] 59472.75 IOPS, 232.32 MiB/s [2024-11-20T16:04:05.896Z] 59645.20 IOPS, 232.99 MiB/s 00:16:07.646 Latency(us) 00:16:07.646 [2024-11-20T16:04:05.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.646 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:07.646 xnvme_bdev : 5.00 59606.17 232.84 0.00 0.00 1069.32 345.01 13409.67 00:16:07.646 [2024-11-20T16:04:05.896Z] =================================================================================================================== 00:16:07.646 [2024-11-20T16:04:05.896Z] Total : 59606.17 232.84 0.00 0.00 1069.32 345.01 13409.67 00:16:08.210 16:04:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:08.210 16:04:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:08.210 16:04:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:08.210 16:04:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:08.210 16:04:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:08.467 { 00:16:08.467 "subsystems": [ 00:16:08.467 { 00:16:08.467 "subsystem": "bdev", 00:16:08.467 "config": [ 00:16:08.467 { 00:16:08.467 "params": { 00:16:08.467 "io_mechanism": "io_uring_cmd", 00:16:08.467 "conserve_cpu": true, 00:16:08.467 "filename": "/dev/ng0n1", 00:16:08.467 "name": "xnvme_bdev" 00:16:08.467 }, 00:16:08.467 "method": "bdev_xnvme_create" 00:16:08.467 }, 00:16:08.467 { 00:16:08.467 "method": "bdev_wait_for_examine" 00:16:08.467 } 00:16:08.467 ] 00:16:08.467 } 00:16:08.467 ] 00:16:08.467 } 00:16:08.467 [2024-11-20 16:04:06.510967] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:16:08.467 [2024-11-20 16:04:06.511086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71513 ] 00:16:08.467 [2024-11-20 16:04:06.671191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.724 [2024-11-20 16:04:06.770574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.981 Running I/O for 5 seconds... 00:16:10.845 31882.00 IOPS, 124.54 MiB/s [2024-11-20T16:04:10.036Z] 36507.00 IOPS, 142.61 MiB/s [2024-11-20T16:04:11.409Z] 38515.33 IOPS, 150.45 MiB/s [2024-11-20T16:04:12.394Z] 39722.75 IOPS, 155.17 MiB/s 00:16:14.144 Latency(us) 00:16:14.144 [2024-11-20T16:04:12.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.144 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:14.144 xnvme_bdev : 5.00 41010.18 160.20 0.00 0.00 1554.85 51.99 184710.70 00:16:14.144 [2024-11-20T16:04:12.394Z] =================================================================================================================== 00:16:14.144 [2024-11-20T16:04:12.394Z] Total : 41010.18 160.20 0.00 0.00 1554.85 51.99 184710.70 00:16:14.709 16:04:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:14.709 16:04:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:14.709 16:04:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:14.709 16:04:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:14.709 16:04:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:14.709 { 00:16:14.709 "subsystems": [ 00:16:14.709 { 00:16:14.709 "subsystem": "bdev", 00:16:14.709 "config": [ 00:16:14.709 { 00:16:14.709 "params": { 00:16:14.709 "io_mechanism": "io_uring_cmd", 00:16:14.709 "conserve_cpu": true, 00:16:14.709 "filename": "/dev/ng0n1", 00:16:14.709 "name": "xnvme_bdev" 00:16:14.709 }, 00:16:14.709 "method": "bdev_xnvme_create" 00:16:14.709 }, 00:16:14.709 { 00:16:14.709 "method": "bdev_wait_for_examine" 00:16:14.709 } 00:16:14.709 ] 00:16:14.709 } 00:16:14.709 ] 00:16:14.709 } 00:16:14.709 [2024-11-20 16:04:12.830404] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:14.709 [2024-11-20 16:04:12.830790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71587 ] 00:16:14.967 [2024-11-20 16:04:13.009556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.967 [2024-11-20 16:04:13.108214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.224 Running I/O for 5 seconds... 
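This second bdevperf sweep repeats the four workloads with conserve_cpu=true — the cc["true"]=-c mapping exercised by xnvme_rpc above. The same create/inspect/delete cycle can be driven by hand against a running spdk_tgt; a sketch, assuming the default /var/tmp/spdk.sock socket and that rpc_cmd is the usual wrapper around scripts/rpc.py:

scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # -c sets conserve_cpu=true
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev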
00:16:17.541 96704.00 IOPS, 377.75 MiB/s [2024-11-20T16:04:16.356Z] 96064.00 IOPS, 375.25 MiB/s [2024-11-20T16:04:17.727Z] 94485.33 IOPS, 369.08 MiB/s [2024-11-20T16:04:18.660Z] 93024.00 IOPS, 363.38 MiB/s 00:16:20.410 Latency(us) 00:16:20.410 [2024-11-20T16:04:18.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.410 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:20.410 xnvme_bdev : 5.00 91430.44 357.15 0.00 0.00 696.41 374.94 5747.00 00:16:20.410 [2024-11-20T16:04:18.660Z] =================================================================================================================== 00:16:20.410 [2024-11-20T16:04:18.660Z] Total : 91430.44 357.15 0.00 0.00 696.41 374.94 5747.00 00:16:20.975 16:04:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:20.975 16:04:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:20.975 16:04:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:20.975 16:04:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:20.975 16:04:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:20.975 { 00:16:20.975 "subsystems": [ 00:16:20.975 { 00:16:20.975 "subsystem": "bdev", 00:16:20.975 "config": [ 00:16:20.975 { 00:16:20.975 "params": { 00:16:20.975 "io_mechanism": "io_uring_cmd", 00:16:20.975 "conserve_cpu": true, 00:16:20.975 "filename": "/dev/ng0n1", 00:16:20.975 "name": "xnvme_bdev" 00:16:20.975 }, 00:16:20.975 "method": "bdev_xnvme_create" 00:16:20.975 }, 00:16:20.975 { 00:16:20.975 "method": "bdev_wait_for_examine" 00:16:20.975 } 00:16:20.975 ] 00:16:20.975 } 00:16:20.975 ] 00:16:20.975 } 00:16:20.975 [2024-11-20 16:04:19.144248] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:20.975 [2024-11-20 16:04:19.144504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71661 ] 00:16:21.233 [2024-11-20 16:04:19.302239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.233 [2024-11-20 16:04:19.406918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.491 Running I/O for 5 seconds... 
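The xnvme_fio_plugin tests bracketing these sweeps push the same bdev through fio's external spdk_bdev engine instead of bdevperf. Stripped of the harness's sanitizer probing, the invocation reduces to roughly the following sketch (the LD_PRELOAD ordering — libasan before the plugin — is what the ldd/grep/awk steps above compute; /tmp/xnvme_bdev.json stands in for the /dev/fd/62 config):

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev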
00:16:23.796 27247.00 IOPS, 106.43 MiB/s [2024-11-20T16:04:22.977Z] 39146.00 IOPS, 152.91 MiB/s [2024-11-20T16:04:23.908Z] 44768.33 IOPS, 174.88 MiB/s [2024-11-20T16:04:24.838Z] 48664.75 IOPS, 190.10 MiB/s [2024-11-20T16:04:24.838Z] 51797.40 IOPS, 202.33 MiB/s 00:16:26.588 Latency(us) 00:16:26.588 [2024-11-20T16:04:24.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.588 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:26.588 xnvme_bdev : 5.00 51782.38 202.27 0.00 0.00 1230.31 56.71 71383.83 00:16:26.588 [2024-11-20T16:04:24.838Z] =================================================================================================================== 00:16:26.588 [2024-11-20T16:04:24.838Z] Total : 51782.38 202.27 0.00 0.00 1230.31 56.71 71383.83 00:16:27.520 ************************************ 00:16:27.520 END TEST xnvme_bdevperf 00:16:27.520 ************************************ 00:16:27.520 00:16:27.520 real 0m25.282s 00:16:27.520 user 0m14.670s 00:16:27.520 sys 0m8.222s 00:16:27.520 16:04:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.520 16:04:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:27.520 ************************************ 00:16:27.520 START TEST xnvme_fio_plugin 00:16:27.520 ************************************ 00:16:27.520 16:04:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:27.520 16:04:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.520 16:04:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.520 16:04:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:27.520 16:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:27.520 { 00:16:27.520 "subsystems": [ 00:16:27.520 { 00:16:27.520 "subsystem": "bdev", 00:16:27.520 "config": [ 00:16:27.520 { 00:16:27.520 "params": { 00:16:27.520 "io_mechanism": "io_uring_cmd", 00:16:27.520 "conserve_cpu": true, 00:16:27.520 "filename": "/dev/ng0n1", 00:16:27.520 "name": "xnvme_bdev" 00:16:27.520 }, 00:16:27.520 "method": "bdev_xnvme_create" 00:16:27.520 }, 00:16:27.520 { 00:16:27.521 "method": "bdev_wait_for_examine" 00:16:27.521 } 00:16:27.521 ] 00:16:27.521 } 00:16:27.521 ] 00:16:27.521 } 00:16:27.521 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:27.521 fio-3.35 00:16:27.521 Starting 1 thread 00:16:34.074 00:16:34.074 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71774: Wed Nov 20 16:04:31 2024 00:16:34.074 read: IOPS=61.0k, BW=238MiB/s (250MB/s)(1192MiB/5001msec) 00:16:34.074 slat (nsec): min=2791, max=78521, avg=3882.75, stdev=1916.60 00:16:34.074 clat (usec): min=217, max=7762, avg=898.17, stdev=225.17 00:16:34.074 lat (usec): min=220, max=7766, avg=902.05, stdev=225.91 00:16:34.074 clat percentiles (usec): 00:16:34.074 | 1.00th=[ 635], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 742], 00:16:34.074 | 30.00th=[ 775], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 889], 00:16:34.074 | 70.00th=[ 955], 80.00th=[ 1037], 90.00th=[ 1139], 95.00th=[ 1254], 00:16:34.074 | 99.00th=[ 1549], 99.50th=[ 1713], 99.90th=[ 3294], 99.95th=[ 4113], 00:16:34.074 | 99.99th=[ 4883] 00:16:34.074 bw ( KiB/s): min=209920, max=260584, per=100.00%, avg=244086.11, stdev=15636.62, samples=9 00:16:34.074 iops : min=52480, max=65146, avg=61021.44, stdev=3909.20, samples=9 00:16:34.074 lat (usec) : 250=0.01%, 500=0.03%, 750=22.86%, 1000=53.09% 00:16:34.074 lat (msec) : 2=23.75%, 4=0.21%, 10=0.06% 00:16:34.074 cpu : usr=46.52%, sys=51.04%, ctx=14, majf=0, minf=762 00:16:34.074 IO depths : 1=1.4%, 2=3.0%, 4=6.3%, 8=12.5%, 16=25.1%, 32=50.2%, >=64=1.6% 00:16:34.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.074 complete : 
0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:34.074 issued rwts: total=305031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.074 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.074 00:16:34.074 Run status group 0 (all jobs): 00:16:34.074 READ: bw=238MiB/s (250MB/s), 238MiB/s-238MiB/s (250MB/s-250MB/s), io=1192MiB (1249MB), run=5001-5001msec 00:16:34.074 ----------------------------------------------------- 00:16:34.074 Suppressions used: 00:16:34.074 count bytes template 00:16:34.074 1 11 /usr/src/fio/parse.c 00:16:34.074 1 8 libtcmalloc_minimal.so 00:16:34.074 1 904 libcrypto.so 00:16:34.074 ----------------------------------------------------- 00:16:34.074 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:34.074 16:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 
--filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.074 { 00:16:34.074 "subsystems": [ 00:16:34.074 { 00:16:34.074 "subsystem": "bdev", 00:16:34.074 "config": [ 00:16:34.074 { 00:16:34.074 "params": { 00:16:34.074 "io_mechanism": "io_uring_cmd", 00:16:34.074 "conserve_cpu": true, 00:16:34.074 "filename": "/dev/ng0n1", 00:16:34.074 "name": "xnvme_bdev" 00:16:34.074 }, 00:16:34.074 "method": "bdev_xnvme_create" 00:16:34.074 }, 00:16:34.074 { 00:16:34.074 "method": "bdev_wait_for_examine" 00:16:34.074 } 00:16:34.074 ] 00:16:34.074 } 00:16:34.074 ] 00:16:34.074 } 00:16:34.332 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:34.332 fio-3.35 00:16:34.332 Starting 1 thread 00:16:40.942 00:16:40.942 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71865: Wed Nov 20 16:04:38 2024 00:16:40.942 write: IOPS=54.9k, BW=214MiB/s (225MB/s)(1072MiB/5001msec); 0 zone resets 00:16:40.942 slat (usec): min=2, max=155, avg= 4.37, stdev= 2.28 00:16:40.942 clat (usec): min=58, max=126341, avg=996.20, stdev=1989.32 00:16:40.942 lat (usec): min=62, max=126344, avg=1000.58, stdev=1989.41 00:16:40.942 clat percentiles (usec): 00:16:40.942 | 1.00th=[ 652], 5.00th=[ 693], 10.00th=[ 725], 20.00th=[ 775], 00:16:40.942 | 30.00th=[ 816], 40.00th=[ 865], 50.00th=[ 906], 60.00th=[ 963], 00:16:40.942 | 70.00th=[ 1029], 80.00th=[ 1106], 90.00th=[ 1237], 95.00th=[ 1352], 00:16:40.942 | 99.00th=[ 1598], 99.50th=[ 1762], 99.90th=[ 11207], 99.95th=[ 23725], 00:16:40.942 | 99.99th=[125305] 00:16:40.942 bw ( KiB/s): min=178176, max=228136, per=99.64%, avg=218683.11, stdev=15712.40, samples=9 00:16:40.942 iops : min=44544, max=57034, avg=54670.78, stdev=3928.10, samples=9 00:16:40.942 lat (usec) : 100=0.01%, 250=0.03%, 500=0.07%, 750=15.16%, 1000=50.02% 00:16:40.942 lat (msec) : 2=34.48%, 4=0.10%, 10=0.02%, 20=0.05%, 50=0.05% 00:16:40.942 lat (msec) : 250=0.02% 00:16:40.942 cpu : usr=49.96%, sys=47.40%, ctx=15, majf=0, minf=763 00:16:40.942 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:16:40.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.942 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:40.942 issued rwts: total=0,274389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.942 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.942 00:16:40.942 Run status group 0 (all jobs): 00:16:40.942 WRITE: bw=214MiB/s (225MB/s), 214MiB/s-214MiB/s (225MB/s-225MB/s), io=1072MiB (1124MB), run=5001-5001msec 00:16:40.942 ----------------------------------------------------- 00:16:40.942 Suppressions used: 00:16:40.942 count bytes template 00:16:40.942 1 11 /usr/src/fio/parse.c 00:16:40.942 1 8 libtcmalloc_minimal.so 00:16:40.942 1 904 libcrypto.so 00:16:40.942 ----------------------------------------------------- 00:16:40.942 00:16:40.942 00:16:40.942 real 0m13.580s 00:16:40.942 user 0m7.561s 00:16:40.942 sys 0m5.425s 00:16:40.942 16:04:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.942 ************************************ 00:16:40.942 END TEST xnvme_fio_plugin 00:16:40.942 16:04:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:40.942 ************************************ 00:16:40.942 16:04:39 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71370 00:16:40.942 16:04:39 nvme_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 71370 ']' 00:16:40.942 Process with pid 71370 is not found 00:16:40.942 16:04:39 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71370 00:16:40.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71370) - No such process 00:16:40.942 16:04:39 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71370 is not found' 00:16:40.942 ************************************ 00:16:40.942 END TEST nvme_xnvme 00:16:40.942 ************************************ 00:16:40.942 00:16:40.942 real 3m26.371s 00:16:40.942 user 1m59.297s 00:16:40.942 sys 1m10.552s 00:16:40.942 16:04:39 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.942 16:04:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.942 16:04:39 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:40.942 16:04:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.942 16:04:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.942 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.942 ************************************ 00:16:40.942 START TEST blockdev_xnvme 00:16:40.942 ************************************ 00:16:40.942 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:40.942 * Looking for test storage... 00:16:40.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:40.942 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.942 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.942 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.200 16:04:39 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:41.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.200 --rc genhtml_branch_coverage=1 00:16:41.200 --rc genhtml_function_coverage=1 00:16:41.200 --rc genhtml_legend=1 00:16:41.200 --rc geninfo_all_blocks=1 00:16:41.200 --rc geninfo_unexecuted_blocks=1 00:16:41.200 00:16:41.200 ' 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:41.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.200 --rc genhtml_branch_coverage=1 00:16:41.200 --rc genhtml_function_coverage=1 00:16:41.200 --rc genhtml_legend=1 00:16:41.200 --rc geninfo_all_blocks=1 00:16:41.200 --rc geninfo_unexecuted_blocks=1 00:16:41.200 00:16:41.200 ' 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:41.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.200 --rc genhtml_branch_coverage=1 00:16:41.200 --rc genhtml_function_coverage=1 00:16:41.200 --rc genhtml_legend=1 00:16:41.200 --rc geninfo_all_blocks=1 00:16:41.200 --rc geninfo_unexecuted_blocks=1 00:16:41.200 00:16:41.200 ' 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:41.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.200 --rc genhtml_branch_coverage=1 00:16:41.200 --rc genhtml_function_coverage=1 00:16:41.200 --rc genhtml_legend=1 00:16:41.200 --rc geninfo_all_blocks=1 00:16:41.200 --rc geninfo_unexecuted_blocks=1 00:16:41.200 00:16:41.200 ' 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72000 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72000 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72000 ']' 00:16:41.200 16:04:39 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.200 16:04:39 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.201 16:04:39 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.201 16:04:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.201 [2024-11-20 16:04:39.287511] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:16:41.201 [2024-11-20 16:04:39.287795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72000 ] 00:16:41.201 [2024-11-20 16:04:39.441265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.458 [2024-11-20 16:04:39.541665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.024 16:04:40 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.024 16:04:40 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:42.024 16:04:40 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:42.024 16:04:40 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:42.024 16:04:40 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:42.024 16:04:40 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:42.024 16:04:40 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.847 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:42.847 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:42.847 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:42.847 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:42.847 16:04:40 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:42.847 nvme0n1 00:16:42.847 nvme0n2 00:16:42.847 nvme0n3 00:16:42.847 nvme1n1 00:16:42.847 nvme2n1 00:16:42.847 nvme3n1 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.847 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 16:04:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq 
-r '.[] | select(.claimed == false)' 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a6815b01-d5dd-405f-987e-f9a546abee23"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6815b01-d5dd-405f-987e-f9a546abee23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "0fec1770-5909-4959-af63-163e9c6b8e4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0fec1770-5909-4959-af63-163e9c6b8e4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "055ddc91-5e1c-414a-b92f-b43174c1d580"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "055ddc91-5e1c-414a-b92f-b43174c1d580",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cfe739ce-d701-4cc9-8c35-e141380eba5f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cfe739ce-d701-4cc9-8c35-e141380eba5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "20c562f8-d00b-4b8e-8822-331e6d2e3cf5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "20c562f8-d00b-4b8e-8822-331e6d2e3cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0ec2bfbd-4fa3-4a81-84ec-5be72327975b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0ec2bfbd-4fa3-4a81-84ec-5be72327975b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:43.106 16:04:41 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72000 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72000 ']' 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72000 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:43.106 16:04:41 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.107 16:04:41 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72000 00:16:43.107 killing process with pid 72000 00:16:43.107 16:04:41 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.107 16:04:41 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.107 16:04:41 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72000' 00:16:43.107 16:04:41 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72000 00:16:43.107 16:04:41 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72000 00:16:45.007 16:04:42 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:45.007 16:04:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:45.007 16:04:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:45.007 16:04:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.007 16:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.007 ************************************ 00:16:45.007 START TEST bdev_hello_world 00:16:45.007 ************************************ 00:16:45.007 16:04:42 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:45.007 [2024-11-20 16:04:42.798688] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:45.007 [2024-11-20 16:04:42.798833] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72273 ] 00:16:45.007 [2024-11-20 16:04:42.953368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.007 [2024-11-20 16:04:43.053484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.264 [2024-11-20 16:04:43.386377] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:45.264 [2024-11-20 16:04:43.386571] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:45.264 [2024-11-20 16:04:43.386593] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:45.264 [2024-11-20 16:04:43.388463] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:45.264 [2024-11-20 16:04:43.388673] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:45.264 [2024-11-20 16:04:43.388694] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:45.264 [2024-11-20 16:04:43.388832] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
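[Annotation] The bdev_hello_world stage above reduces to one example binary run against the JSON config generated by the bdev_xnvme_create lines earlier: it opens the named bdev, acquires an I/O channel, writes "Hello World!", and reads it back, which is exactly the NOTICE sequence in the trace. Run from an SPDK checkout it would look like this (paths relative to the repo root; the harness uses the absolute /home/vagrant/spdk_repo equivalents):

    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1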
00:16:45.264 00:16:45.264 [2024-11-20 16:04:43.388850] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:46.199 00:16:46.199 real 0m1.370s 00:16:46.199 user 0m1.095s 00:16:46.199 sys 0m0.163s 00:16:46.199 16:04:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.199 ************************************ 00:16:46.199 END TEST bdev_hello_world 00:16:46.199 ************************************ 00:16:46.199 16:04:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:46.199 16:04:44 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:46.199 16:04:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.199 16:04:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.199 16:04:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.199 ************************************ 00:16:46.199 START TEST bdev_bounds 00:16:46.199 ************************************ 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:46.199 Process bdevio pid: 72314 00:16:46.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72314 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72314' 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72314 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72314 ']' 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.199 16:04:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:46.199 [2024-11-20 16:04:44.211092] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:16:46.199 [2024-11-20 16:04:44.211378] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72314 ] 00:16:46.199 [2024-11-20 16:04:44.369669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.456 [2024-11-20 16:04:44.475753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.456 [2024-11-20 16:04:44.475768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.456 [2024-11-20 16:04:44.475775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.021 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.021 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:47.021 16:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:47.021 I/O targets: 00:16:47.021 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:47.021 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:47.021 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:47.021 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:47.021 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:47.021 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:47.021 00:16:47.021 00:16:47.021 CUnit - A unit testing framework for C - Version 2.1-3 00:16:47.021 http://cunit.sourceforge.net/ 00:16:47.021 00:16:47.021 00:16:47.021 Suite: bdevio tests on: nvme3n1 00:16:47.021 Test: blockdev write read block ...passed 00:16:47.021 Test: blockdev write zeroes read block ...passed 00:16:47.021 Test: blockdev write zeroes read no split ...passed 00:16:47.021 Test: blockdev write zeroes read split ...passed 00:16:47.021 Test: blockdev write zeroes read split partial ...passed 00:16:47.021 Test: blockdev reset ...passed 00:16:47.021 Test: blockdev write read 8 blocks ...passed 00:16:47.021 Test: blockdev write read size > 128k ...passed 00:16:47.021 Test: blockdev write read invalid size ...passed 00:16:47.021 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.021 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.021 Test: blockdev write read max offset ...passed 00:16:47.021 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.021 Test: blockdev writev readv 8 blocks ...passed 00:16:47.021 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.021 Test: blockdev writev readv block ...passed 00:16:47.021 Test: blockdev writev readv size > 128k ...passed 00:16:47.021 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.021 Test: blockdev comparev and writev ...passed 00:16:47.021 Test: blockdev nvme passthru rw ...passed 00:16:47.021 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.021 Test: blockdev nvme admin passthru ...passed 00:16:47.021 Test: blockdev copy ...passed 00:16:47.021 Suite: bdevio tests on: nvme2n1 00:16:47.021 Test: blockdev write read block ...passed 00:16:47.021 Test: blockdev write zeroes read block ...passed 00:16:47.021 Test: blockdev write zeroes read no split ...passed 00:16:47.021 Test: blockdev write zeroes read split ...passed 00:16:47.021 Test: blockdev write zeroes read split partial ...passed 00:16:47.021 Test: blockdev reset ...passed 
00:16:47.021 Test: blockdev write read 8 blocks ...passed 00:16:47.021 Test: blockdev write read size > 128k ...passed 00:16:47.021 Test: blockdev write read invalid size ...passed 00:16:47.021 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.021 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.021 Test: blockdev write read max offset ...passed 00:16:47.021 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.021 Test: blockdev writev readv 8 blocks ...passed 00:16:47.021 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.021 Test: blockdev writev readv block ...passed 00:16:47.021 Test: blockdev writev readv size > 128k ...passed 00:16:47.021 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.021 Test: blockdev comparev and writev ...passed 00:16:47.021 Test: blockdev nvme passthru rw ...passed 00:16:47.021 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.021 Test: blockdev nvme admin passthru ...passed 00:16:47.021 Test: blockdev copy ...passed 00:16:47.021 Suite: bdevio tests on: nvme1n1 00:16:47.021 Test: blockdev write read block ...passed 00:16:47.021 Test: blockdev write zeroes read block ...passed 00:16:47.021 Test: blockdev write zeroes read no split ...passed 00:16:47.279 Test: blockdev write zeroes read split ...passed 00:16:47.279 Test: blockdev write zeroes read split partial ...passed 00:16:47.279 Test: blockdev reset ...passed 00:16:47.279 Test: blockdev write read 8 blocks ...passed 00:16:47.279 Test: blockdev write read size > 128k ...passed 00:16:47.279 Test: blockdev write read invalid size ...passed 00:16:47.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.279 Test: blockdev write read max offset ...passed 00:16:47.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.279 Test: blockdev writev readv 8 blocks ...passed 00:16:47.279 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.279 Test: blockdev writev readv block ...passed 00:16:47.279 Test: blockdev writev readv size > 128k ...passed 00:16:47.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.279 Test: blockdev comparev and writev ...passed 00:16:47.279 Test: blockdev nvme passthru rw ...passed 00:16:47.279 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.279 Test: blockdev nvme admin passthru ...passed 00:16:47.279 Test: blockdev copy ...passed 00:16:47.279 Suite: bdevio tests on: nvme0n3 00:16:47.279 Test: blockdev write read block ...passed 00:16:47.279 Test: blockdev write zeroes read block ...passed 00:16:47.279 Test: blockdev write zeroes read no split ...passed 00:16:47.279 Test: blockdev write zeroes read split ...passed 00:16:47.279 Test: blockdev write zeroes read split partial ...passed 00:16:47.279 Test: blockdev reset ...passed 00:16:47.279 Test: blockdev write read 8 blocks ...passed 00:16:47.279 Test: blockdev write read size > 128k ...passed 00:16:47.279 Test: blockdev write read invalid size ...passed 00:16:47.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.279 Test: blockdev write read max offset ...passed 00:16:47.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.279 Test: blockdev writev readv 8 blocks 
...passed 00:16:47.279 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.279 Test: blockdev writev readv block ...passed 00:16:47.279 Test: blockdev writev readv size > 128k ...passed 00:16:47.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.279 Test: blockdev comparev and writev ...passed 00:16:47.279 Test: blockdev nvme passthru rw ...passed 00:16:47.279 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.279 Test: blockdev nvme admin passthru ...passed 00:16:47.279 Test: blockdev copy ...passed 00:16:47.279 Suite: bdevio tests on: nvme0n2 00:16:47.279 Test: blockdev write read block ...passed 00:16:47.279 Test: blockdev write zeroes read block ...passed 00:16:47.279 Test: blockdev write zeroes read no split ...passed 00:16:47.279 Test: blockdev write zeroes read split ...passed 00:16:47.279 Test: blockdev write zeroes read split partial ...passed 00:16:47.279 Test: blockdev reset ...passed 00:16:47.279 Test: blockdev write read 8 blocks ...passed 00:16:47.279 Test: blockdev write read size > 128k ...passed 00:16:47.279 Test: blockdev write read invalid size ...passed 00:16:47.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.279 Test: blockdev write read max offset ...passed 00:16:47.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.279 Test: blockdev writev readv 8 blocks ...passed 00:16:47.279 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.279 Test: blockdev writev readv block ...passed 00:16:47.279 Test: blockdev writev readv size > 128k ...passed 00:16:47.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.279 Test: blockdev comparev and writev ...passed 00:16:47.279 Test: blockdev nvme passthru rw ...passed 00:16:47.279 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.279 Test: blockdev nvme admin passthru ...passed 00:16:47.279 Test: blockdev copy ...passed 00:16:47.279 Suite: bdevio tests on: nvme0n1 00:16:47.280 Test: blockdev write read block ...passed 00:16:47.280 Test: blockdev write zeroes read block ...passed 00:16:47.280 Test: blockdev write zeroes read no split ...passed 00:16:47.280 Test: blockdev write zeroes read split ...passed 00:16:47.280 Test: blockdev write zeroes read split partial ...passed 00:16:47.280 Test: blockdev reset ...passed 00:16:47.280 Test: blockdev write read 8 blocks ...passed 00:16:47.280 Test: blockdev write read size > 128k ...passed 00:16:47.280 Test: blockdev write read invalid size ...passed 00:16:47.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.280 Test: blockdev write read max offset ...passed 00:16:47.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.280 Test: blockdev writev readv 8 blocks ...passed 00:16:47.280 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.280 Test: blockdev writev readv block ...passed 00:16:47.280 Test: blockdev writev readv size > 128k ...passed 00:16:47.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.280 Test: blockdev comparev and writev ...passed 00:16:47.280 Test: blockdev nvme passthru rw ...passed 00:16:47.280 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.280 Test: blockdev nvme admin passthru ...passed 00:16:47.280 Test: blockdev copy ...passed 
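[Annotation] Each "Suite: bdevio tests on: <bdev>" block above runs the same 23 I/O tests per device (hence 138 tests across 6 suites in the summary below); with -w the bdevio app idles after startup until tests.py triggers the run over JSON-RPC. A minimal manual reproduction, assuming bdevio is already built and the default /var/tmp/spdk.sock socket is free:

    # start the bdevio app in wait mode against the same bdev config
    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # once it listens on /var/tmp/spdk.sock, fire the suites:
    sudo ./test/bdev/bdevio/tests.py perform_tests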
00:16:47.280 00:16:47.280 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.280 suites 6 6 n/a 0 0 00:16:47.280 tests 138 138 138 0 0 00:16:47.280 asserts 780 780 780 0 n/a 00:16:47.280 00:16:47.280 Elapsed time = 0.909 seconds 00:16:47.280 0 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72314 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72314 ']' 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72314 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72314 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.280 killing process with pid 72314 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72314' 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72314 00:16:47.280 16:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72314 00:16:48.221 16:04:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:48.221 00:16:48.221 real 0m2.084s 00:16:48.221 user 0m5.249s 00:16:48.221 sys 0m0.269s 00:16:48.221 ************************************ 00:16:48.221 END TEST bdev_bounds 00:16:48.221 ************************************ 00:16:48.221 16:04:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.221 16:04:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:48.221 16:04:46 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:48.221 16:04:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:48.221 16:04:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.221 16:04:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:48.221 ************************************ 00:16:48.221 START TEST bdev_nbd 00:16:48.221 ************************************ 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
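[Annotation] The bdev_nbd stage that follows exports each xnvme bdev as a kernel /dev/nbdX node over a dedicated RPC socket, then sanity-checks every mapping with a single 4 KiB direct-I/O dd — the waitfornbd/dd pattern visible below. The per-device round trip, with socket path, device pairing, and test file taken from the trace:

    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    sudo dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0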
00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:48.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72368 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72368 /var/tmp/spdk-nbd.sock 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72368 ']' 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:48.221 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.222 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:48.222 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.222 16:04:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:48.222 [2024-11-20 16:04:46.339469] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:16:48.222 [2024-11-20 16:04:46.339579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.480 [2024-11-20 16:04:46.505173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.480 [2024-11-20 16:04:46.606690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:49.044 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.301 
1+0 records in 00:16:49.301 1+0 records out 00:16:49.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425606 s, 9.6 MB/s 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:49.301 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.559 1+0 records in 00:16:49.559 1+0 records out 00:16:49.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437636 s, 9.4 MB/s 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:49.559 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:49.815 16:04:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.815 1+0 records in 00:16:49.815 1+0 records out 00:16:49.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434387 s, 9.4 MB/s 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:49.815 16:04:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.072 1+0 records in 00:16:50.072 1+0 records out 00:16:50.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653123 s, 6.3 MB/s 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:50.072 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.329 1+0 records in 00:16:50.329 1+0 records out 00:16:50.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595388 s, 6.9 MB/s 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:50.329 16:04:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:50.329 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.586 1+0 records in 00:16:50.586 1+0 records out 00:16:50.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353807 s, 11.6 MB/s 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:50.586 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:50.586 { 00:16:50.586 "nbd_device": "/dev/nbd0", 00:16:50.586 "bdev_name": "nvme0n1" 00:16:50.586 }, 00:16:50.586 { 00:16:50.586 "nbd_device": "/dev/nbd1", 00:16:50.586 "bdev_name": "nvme0n2" 00:16:50.586 }, 00:16:50.586 { 00:16:50.586 "nbd_device": "/dev/nbd2", 00:16:50.586 "bdev_name": "nvme0n3" 00:16:50.586 }, 00:16:50.586 { 00:16:50.586 "nbd_device": "/dev/nbd3", 00:16:50.586 "bdev_name": "nvme1n1" 00:16:50.586 }, 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd4", 00:16:50.587 "bdev_name": "nvme2n1" 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd5", 00:16:50.587 "bdev_name": "nvme3n1" 00:16:50.587 } 00:16:50.587 ]' 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd0", 00:16:50.587 "bdev_name": "nvme0n1" 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd1", 00:16:50.587 "bdev_name": "nvme0n2" 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd2", 00:16:50.587 "bdev_name": "nvme0n3" 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd3", 00:16:50.587 "bdev_name": "nvme1n1" 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "nbd_device": "/dev/nbd4", 00:16:50.587 "bdev_name": "nvme2n1" 00:16:50.587 }, 00:16:50.587 { 00:16:50.587 "nbd_device": 
"/dev/nbd5", 00:16:50.587 "bdev_name": "nvme3n1" 00:16:50.587 } 00:16:50.587 ]' 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.587 16:04:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.843 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.101 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.417 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.676 16:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.944 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
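The stop path above repeats one idiom per device: ask the SPDK app to detach the bdev over its RPC socket, then poll /proc/partitions until the kernel node disappears. A minimal sketch of that pair, reconstructed from the xtrace (the traced run breaks on its first probe, so the retry branch never shows; the 0.1 s sleep is an assumption):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    nbd_stop_disks() {
        local rpc_server=$1 nbd_list=($2) i
        for i in "${nbd_list[@]}"; do
            "$RPC" -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still registered; give the kernel a beat
            else
                break        # gone from /proc/partitions: detach finished
            fi
        done
        return 0
    }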
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:52.202 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:52.461 /dev/nbd0 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- 
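The zero-count check that just passed round-trips through the RPC server: nbd_get_disks returns a JSON array, jq strips it to device paths, and grep -c counts them. A sketch assuming the same socket and repo layout as the trace; the bare `true` fallback is the important detail, since `grep -c` exits nonzero when the count is 0 and would otherwise kill a `set -e` script at exactly the moment everything was cleaned up correctly:

    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        echo "$disks_name" | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # "0" here, "6" after re-attach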
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.461 1+0 records in 00:16:52.461 1+0 records out 00:16:52.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321856 s, 12.7 MB/s 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:52.461 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:52.719 /dev/nbd1 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.719 1+0 records in 00:16:52.719 1+0 records out 00:16:52.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323604 s, 12.7 MB/s 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:52.719 16:04:50 
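The readiness probe traced here (waitfornbd) is the mirror image of the detach wait: after nbd_start_disk it first waits for the node to appear in /proc/partitions, then proves the device actually services I/O by reading a single 4 KiB block with O_DIRECT into a scratch file and checking that a nonzero size landed. A sketch of the same shape; the scratch path, the sleeps, and the nonzero return on timeout are assumptions (the trace succeeds on its first pass):

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # iflag=direct bypasses the page cache, so a success here means
            # the nbd server really answered the read
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }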
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:52.719 16:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:52.977 /dev/nbd10 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.977 1+0 records in 00:16:52.977 1+0 records out 00:16:52.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525818 s, 7.8 MB/s 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:52.977 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:53.235 /dev/nbd11 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.235 16:04:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.235 1+0 records in 00:16:53.235 1+0 records out 00:16:53.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510411 s, 8.0 MB/s 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:53.235 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:53.235 /dev/nbd12 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.494 1+0 records in 00:16:53.494 1+0 records out 00:16:53.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563176 s, 7.3 MB/s 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:53.494 /dev/nbd13 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.494 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.753 1+0 records in 00:16:53.753 1+0 records out 00:16:53.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530204 s, 7.7 MB/s 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd0", 00:16:53.753 "bdev_name": "nvme0n1" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd1", 00:16:53.753 "bdev_name": "nvme0n2" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd10", 00:16:53.753 "bdev_name": "nvme0n3" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd11", 00:16:53.753 "bdev_name": "nvme1n1" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd12", 00:16:53.753 "bdev_name": "nvme2n1" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd13", 00:16:53.753 "bdev_name": "nvme3n1" 00:16:53.753 } 00:16:53.753 ]' 00:16:53.753 16:04:51 
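With all six devices up, the JSON listing that follows is the ground truth for the bdev-to-device mapping that nbd_start_disks established. A sketch of that attach loop, walking the two whitespace-separated lists in lockstep (waitfornbd is the probe sketched earlier):

    nbd_start_disks() {
        local rpc_server=$1 bdev_list=($2) nbd_list=($3) i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
                nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }

    nbd_start_disks /var/tmp/spdk-nbd.sock \
        'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' \
        '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'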
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd0", 00:16:53.753 "bdev_name": "nvme0n1" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd1", 00:16:53.753 "bdev_name": "nvme0n2" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd10", 00:16:53.753 "bdev_name": "nvme0n3" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd11", 00:16:53.753 "bdev_name": "nvme1n1" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd12", 00:16:53.753 "bdev_name": "nvme2n1" 00:16:53.753 }, 00:16:53.753 { 00:16:53.753 "nbd_device": "/dev/nbd13", 00:16:53.753 "bdev_name": "nvme3n1" 00:16:53.753 } 00:16:53.753 ]' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:53.753 /dev/nbd1 00:16:53.753 /dev/nbd10 00:16:53.753 /dev/nbd11 00:16:53.753 /dev/nbd12 00:16:53.753 /dev/nbd13' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:53.753 /dev/nbd1 00:16:53.753 /dev/nbd10 00:16:53.753 /dev/nbd11 00:16:53.753 /dev/nbd12 00:16:53.753 /dev/nbd13' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:53.753 16:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:53.753 256+0 records in 00:16:53.753 256+0 records out 00:16:53.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041875 s, 250 MB/s 00:16:53.753 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:54.012 256+0 records in 00:16:54.012 256+0 records out 00:16:54.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0613998 s, 17.1 MB/s 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:54.012 256+0 records in 00:16:54.012 256+0 records out 00:16:54.012 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0648669 s, 16.2 MB/s 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:54.012 256+0 records in 00:16:54.012 256+0 records out 00:16:54.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0658875 s, 15.9 MB/s 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:54.012 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:54.270 256+0 records in 00:16:54.270 256+0 records out 00:16:54.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0735199 s, 14.3 MB/s 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:54.270 256+0 records in 00:16:54.270 256+0 records out 00:16:54.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0683555 s, 15.3 MB/s 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:54.270 256+0 records in 00:16:54.270 256+0 records out 00:16:54.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0613277 s, 17.1 MB/s 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:54.270 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.271 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.545 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.804 16:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
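The write/verify pair that just completed is a plain end-to-end data check: seed 1 MiB of random bytes once, push the same pattern through every device with O_DIRECT so the page cache cannot mask a broken backend, then byte-compare each device against the seed file. A sketch of both phases (the scratch path is simplified; the trace keeps it under test/bdev/):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest i
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # -b prints differing bytes on mismatch
            done
            rm "$tmp_file"
        fi
    }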
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:55.062 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:55.062 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.063 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.321 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:55.578 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:55.578 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:55.578 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:55.578 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.578 16:04:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.578 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:55.579 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:55.579 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.579 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:55.579 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:55.579 16:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:55.836 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:55.837 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:56.095 malloc_lvol_verify 00:16:56.095 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:56.352 89aff3b5-36b5-43e3-9c8a-c2e6af65abb3 00:16:56.352 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:56.609 5fcb4193-4a0a-446b-9010-a01de20c8e1c 00:16:56.609 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:56.866 /dev/nbd0 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
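The lvol pass being set up here chains four RPCs and one sysfs check: a 16 MiB malloc bdev with 512-byte blocks becomes an lvolstore, a 4 MiB lvol is carved from it and exported as /dev/nbd0, and mkfs.ext4 then exercises the whole stack. A sketch of the sequence; the retry loop on the size file is an assumption (the trace read 8192 sectors on its first look):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    while [[ ! -e /sys/block/nbd0/size ]] || (($(< /sys/block/nbd0/size) == 0)); do
        sleep 0.1    # wait until the kernel publishes a nonzero capacity
    done
    mkfs.ext4 /dev/nbd0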
00:16:56.866 mke2fs 1.47.0 (5-Feb-2023) 00:16:56.866 Discarding device blocks: 0/4096 done 00:16:56.866 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:56.866 00:16:56.866 Allocating group tables: 0/1 done 00:16:56.866 Writing inode tables: 0/1 done 00:16:56.866 Creating journal (1024 blocks): done 00:16:56.866 Writing superblocks and filesystem accounting information: 0/1 done 00:16:56.866 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.866 16:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:56.866 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.866 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.866 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.866 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.866 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.866 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.124 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72368 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72368 ']' 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72368 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72368 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.125 killing process with pid 72368 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72368' 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72368 00:16:57.125 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72368 00:16:57.691 16:04:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:57.691 00:16:57.691 real 0m9.623s 00:16:57.691 user 0m13.630s 00:16:57.691 sys 0m3.259s 00:16:57.691 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.691 16:04:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:57.691 ************************************ 
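killprocess, starting at the end of the line above, is the standard teardown helper in these runs: confirm the pid is still alive with the null signal, look up its command name so a sudo wrapper is never signalled directly, then SIGTERM and reap it. A simplified sketch (the traced helper carries extra branches for non-Linux hosts and for the sudo case; those are elided here):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                       # already gone
        process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_0" in this run
        [[ $process_name == sudo ]] && return 1          # don't signal the wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # works because the app is a child of this shell
    }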
00:16:57.691 END TEST bdev_nbd 00:16:57.691 ************************************ 00:16:57.691 16:04:55 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:57.691 16:04:55 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:57.691 16:04:55 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:57.691 16:04:55 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:57.691 16:04:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.691 16:04:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.691 16:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.691 ************************************ 00:16:57.691 START TEST bdev_fio 00:16:57.691 ************************************ 00:16:57.691 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:57.691 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:57.691 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:57.691 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:57.691 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:57.949 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:57.950 ************************************ 00:16:57.950 START TEST bdev_fio_rw_verify 00:16:57.950 ************************************ 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
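The fio config assembled just above is generated rather than checked in: serialize_overlap=1 is appended for AIO-style verify workloads once the fio version is confirmed to be 3.x, then one [job_<bdev>] stanza per xNVMe bdev follows. A sketch of that assembly; the append redirections are assumptions, since xtrace does not print redirections:

    config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

    if [[ $(/usr/src/fio/fio --version) == *fio-3* ]]; then
        echo serialize_overlap=1 >> "$config"   # option exists on fio 3.x
    fi

    bdevs_name=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
    for b in "${bdevs_name[@]}"; do
        {
            echo "[job_${b}]"
            echo "filename=${b}"   # names resolve via --spdk_json_conf, not /dev
        } >> "$config"
    done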
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.950 16:04:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:57.950 16:04:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:57.950 16:04:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:57.950 16:04:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:57.950 16:04:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:57.950 16:04:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:57.950 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:57.950 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:57.950 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:57.950 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:57.950 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:57.950 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:57.950 fio-3.35 00:16:57.950 Starting 6 threads 00:17:10.143 00:17:10.143 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72762: Wed Nov 20 16:05:06 2024 00:17:10.143 read: IOPS=40.5k, BW=158MiB/s (166MB/s)(1582MiB/10001msec) 00:17:10.143 slat (usec): min=2, max=1320, avg= 4.63, stdev= 4.02 00:17:10.143 clat (usec): min=67, max=134321, avg=377.84, 
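The LD_PRELOAD juggling above is how an ASan-instrumented fio plugin gets loaded safely: the ASan runtime must be the first object in the process, so the wrapper scrapes its path out of `ldd` on the plugin (with libclang_rt.asan as the fallback, per the sanitizers array in the trace) and preloads it ahead of the spdk_bdev engine. A sketch using the flags visible in the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # /usr/lib64/libasan.so.8 here

    if [[ -n $asan_lib ]]; then
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
            --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
            --verify_state_save=0 \
            --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
            --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output \
            /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    fi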
stdev=564.28 00:17:10.143 lat (usec): min=70, max=134324, avg=382.47, stdev=564.41 00:17:10.143 clat percentiles (usec): 00:17:10.143 | 50.000th=[ 347], 99.000th=[ 947], 99.900th=[ 1450], 00:17:10.143 | 99.990th=[ 3621], 99.999th=[108528] 00:17:10.143 write: IOPS=40.9k, BW=160MiB/s (168MB/s)(1599MiB/10001msec); 0 zone resets 00:17:10.143 slat (usec): min=3, max=2579, avg=23.47, stdev=35.47 00:17:10.143 clat (usec): min=58, max=171809, avg=584.25, stdev=2700.18 00:17:10.143 lat (usec): min=73, max=171829, avg=607.71, stdev=2700.48 00:17:10.143 clat percentiles (usec): 00:17:10.143 | 50.000th=[ 490], 99.000th=[ 1254], 99.900th=[ 12518], 00:17:10.143 | 99.990th=[143655], 99.999th=[170918] 00:17:10.143 bw ( KiB/s): min=115288, max=195443, per=99.83%, avg=163473.47, stdev=3309.40, samples=114 00:17:10.143 iops : min=28822, max=48860, avg=40868.05, stdev=827.33, samples=114 00:17:10.143 lat (usec) : 100=0.13%, 250=18.84%, 500=46.00%, 750=25.78%, 1000=7.19% 00:17:10.143 lat (msec) : 2=1.94%, 4=0.05%, 10=0.02%, 20=0.03%, 50=0.01% 00:17:10.143 lat (msec) : 100=0.01%, 250=0.02% 00:17:10.143 cpu : usr=50.70%, sys=31.84%, ctx=9606, majf=0, minf=32666 00:17:10.143 IO depths : 1=11.4%, 2=23.7%, 4=51.3%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.143 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.143 issued rwts: total=405038,409429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:10.143 00:17:10.143 Run status group 0 (all jobs): 00:17:10.143 READ: bw=158MiB/s (166MB/s), 158MiB/s-158MiB/s (166MB/s-166MB/s), io=1582MiB (1659MB), run=10001-10001msec 00:17:10.143 WRITE: bw=160MiB/s (168MB/s), 160MiB/s-160MiB/s (168MB/s-168MB/s), io=1599MiB (1677MB), run=10001-10001msec 00:17:10.143 ----------------------------------------------------- 00:17:10.143 Suppressions used: 00:17:10.143 count bytes template 00:17:10.143 6 48 /usr/src/fio/parse.c 00:17:10.143 4089 392544 /usr/src/fio/iolog.c 00:17:10.143 1 8 libtcmalloc_minimal.so 00:17:10.143 1 904 libcrypto.so 00:17:10.143 ----------------------------------------------------- 00:17:10.143 00:17:10.143 00:17:10.143 real 0m11.914s 00:17:10.143 user 0m31.912s 00:17:10.143 sys 0m19.382s 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:10.143 ************************************ 00:17:10.143 END TEST bdev_fio_rw_verify 00:17:10.143 ************************************ 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:10.143 16:05:07 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:10.143 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a6815b01-d5dd-405f-987e-f9a546abee23"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6815b01-d5dd-405f-987e-f9a546abee23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "0fec1770-5909-4959-af63-163e9c6b8e4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0fec1770-5909-4959-af63-163e9c6b8e4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "055ddc91-5e1c-414a-b92f-b43174c1d580"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "055ddc91-5e1c-414a-b92f-b43174c1d580",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cfe739ce-d701-4cc9-8c35-e141380eba5f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cfe739ce-d701-4cc9-8c35-e141380eba5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "20c562f8-d00b-4b8e-8822-331e6d2e3cf5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "20c562f8-d00b-4b8e-8822-331e6d2e3cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0ec2bfbd-4fa3-4a81-84ec-5be72327975b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0ec2bfbd-4fa3-4a81-84ec-5be72327975b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:10.144 /home/vagrant/spdk_repo/spdk 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:10.144 16:05:07 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:10.144 00:17:10.144 real 0m12.052s 00:17:10.144 user 0m31.974s 00:17:10.144 sys 0m19.453s 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.144 16:05:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:10.144 ************************************ 00:17:10.144 END TEST bdev_fio 00:17:10.144 ************************************ 00:17:10.144 16:05:08 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:10.144 16:05:08 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:10.144 16:05:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:10.144 16:05:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.144 16:05:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:10.144 ************************************ 00:17:10.144 START TEST bdev_verify 00:17:10.144 ************************************ 00:17:10.144 16:05:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:10.144 [2024-11-20 16:05:08.090660] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:10.144 [2024-11-20 16:05:08.090788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:17:10.144 [2024-11-20 16:05:08.251664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.144 [2024-11-20 16:05:08.353041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.144 [2024-11-20 16:05:08.353220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.710 Running I/O for 5 seconds... 
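For reference, this bdev_verify test reduces to a single bdevperf invocation against the bdev.json dumped earlier in the run; a minimal sketch, assuming an SPDK build tree at the usual paths, with the flags copied verbatim from the run_test line above:

    # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: read-back verification;
    # -t 5: run for 5 seconds; -C and -m 0x3 (cores 0 and 1) exactly as in the log
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3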
00:17:13.016 24768.00 IOPS, 96.75 MiB/s [2024-11-20T16:05:12.201Z] 23520.00 IOPS, 91.88 MiB/s [2024-11-20T16:05:13.134Z] 24170.67 IOPS, 94.42 MiB/s [2024-11-20T16:05:14.070Z] 24008.00 IOPS, 93.78 MiB/s 00:17:15.820 Latency(us) 00:17:15.820 [2024-11-20T16:05:14.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.820 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x0 length 0x80000 00:17:15.820 nvme0n1 : 5.06 1746.56 6.82 0.00 0.00 73139.11 8922.98 76626.71 00:17:15.820 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x80000 length 0x80000 00:17:15.820 nvme0n1 : 5.06 1668.64 6.52 0.00 0.00 76559.66 11191.53 72190.42 00:17:15.820 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x0 length 0x80000 00:17:15.820 nvme0n2 : 5.06 1745.91 6.82 0.00 0.00 73026.53 12401.43 66947.54 00:17:15.820 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x80000 length 0x80000 00:17:15.820 nvme0n2 : 5.06 1670.89 6.53 0.00 0.00 76291.31 14014.62 74206.92 00:17:15.820 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x0 length 0x80000 00:17:15.820 nvme0n3 : 5.06 1745.34 6.82 0.00 0.00 72905.93 17241.01 58074.98 00:17:15.820 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x80000 length 0x80000 00:17:15.820 nvme0n3 : 5.06 1668.14 6.52 0.00 0.00 76255.47 13510.50 70173.93 00:17:15.820 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x0 length 0xbd0bd 00:17:15.820 nvme1n1 : 5.07 3170.96 12.39 0.00 0.00 39949.47 3327.21 56058.49 00:17:15.820 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:15.820 nvme1n1 : 5.05 3001.16 11.72 0.00 0.00 42213.08 2923.91 72997.02 00:17:15.820 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x0 length 0x20000 00:17:15.820 nvme2n1 : 5.05 1747.19 6.82 0.00 0.00 72458.43 8620.50 72997.02 00:17:15.820 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x20000 length 0x20000 00:17:15.820 nvme2n1 : 5.07 1692.45 6.61 0.00 0.00 74758.19 8166.79 68157.44 00:17:15.820 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0x0 length 0xa0000 00:17:15.820 nvme3n1 : 5.07 1766.35 6.90 0.00 0.00 71514.22 2079.51 70577.23 00:17:15.820 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.820 Verification LBA range: start 0xa0000 length 0xa0000 00:17:15.821 nvme3n1 : 5.07 1690.86 6.60 0.00 0.00 74660.37 3037.34 71383.83 00:17:15.821 [2024-11-20T16:05:14.071Z] =================================================================================================================== 00:17:15.821 [2024-11-20T16:05:14.071Z] Total : 23314.46 91.07 0.00 0.00 65368.27 2079.51 76626.71 00:17:16.391 00:17:16.391 real 0m6.588s 00:17:16.391 user 0m10.621s 00:17:16.391 sys 0m1.578s 00:17:16.391 16:05:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.391 16:05:14 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:16.391 ************************************ 00:17:16.391 END TEST bdev_verify 00:17:16.391 ************************************ 00:17:16.661 16:05:14 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:16.661 16:05:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:16.661 16:05:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.661 16:05:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.661 ************************************ 00:17:16.661 START TEST bdev_verify_big_io 00:17:16.661 ************************************ 00:17:16.661 16:05:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:16.661 [2024-11-20 16:05:14.754968] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:16.661 [2024-11-20 16:05:14.755171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73042 ] 00:17:16.922 [2024-11-20 16:05:14.927111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:16.922 [2024-11-20 16:05:15.049818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.922 [2024-11-20 16:05:15.049833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.492 Running I/O for 5 seconds... 
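The bandwidth figures throughout these results pair binary and decimal units: fio reports MiB/s (2^20 bytes) with the decimal MB/s (10^6 bytes) equivalent in parentheses, and bdevperf's tables use the same binary MiB/s. A quick shell check of the pairing from the fio summary above:

    echo $(( 160 * 1048576 ))   # 167772160 bytes/s, i.e. fio's "160MiB/s (168MB/s)" pair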
00:17:23.067 1584.00 IOPS, 99.00 MiB/s [2024-11-20T16:05:21.888Z] 2208.00 IOPS, 138.00 MiB/s [2024-11-20T16:05:22.459Z] 2722.67 IOPS, 170.17 MiB/s 00:17:24.209 Latency(us) 00:17:24.209 [2024-11-20T16:05:22.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.209 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x0 length 0x8000 00:17:24.209 nvme0n1 : 5.77 85.99 5.37 0.00 0.00 1436558.88 121796.14 2374621.34 00:17:24.209 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x8000 length 0x8000 00:17:24.209 nvme0n1 : 5.68 91.54 5.72 0.00 0.00 1253498.15 229073.53 1806777.11 00:17:24.209 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x0 length 0x8000 00:17:24.209 nvme0n2 : 5.77 133.09 8.32 0.00 0.00 905671.60 5343.70 1032444.06 00:17:24.209 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x8000 length 0x8000 00:17:24.209 nvme0n2 : 5.96 101.96 6.37 0.00 0.00 1107708.31 75416.81 1413157.81 00:17:24.209 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x0 length 0x8000 00:17:24.209 nvme0n3 : 5.86 106.40 6.65 0.00 0.00 1104077.37 101227.91 2051982.57 00:17:24.209 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x8000 length 0x8000 00:17:24.209 nvme0n3 : 6.10 110.23 6.89 0.00 0.00 967960.29 5016.02 1038896.84 00:17:24.209 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x0 length 0xbd0b 00:17:24.209 nvme1n1 : 5.77 127.49 7.97 0.00 0.00 886551.12 34078.72 1348630.06 00:17:24.209 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:24.209 nvme1n1 : 6.23 184.77 11.55 0.00 0.00 554307.06 5797.42 764653.88 00:17:24.209 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x0 length 0x2000 00:17:24.209 nvme2n1 : 5.87 128.12 8.01 0.00 0.00 865311.98 27625.94 1729343.80 00:17:24.209 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x2000 length 0x2000 00:17:24.209 nvme2n1 : 6.42 204.51 12.78 0.00 0.00 476739.22 1449.35 1077613.49 00:17:24.209 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0x0 length 0xa000 00:17:24.209 nvme3n1 : 5.87 153.90 9.62 0.00 0.00 701611.97 1562.78 1058255.16 00:17:24.209 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.209 Verification LBA range: start 0xa000 length 0xa000 00:17:24.209 nvme3n1 : 6.65 269.62 16.85 0.00 0.00 351609.58 283.57 2710165.66 00:17:24.209 [2024-11-20T16:05:22.459Z] =================================================================================================================== 00:17:24.209 [2024-11-20T16:05:22.459Z] Total : 1697.62 106.10 0.00 0.00 764745.52 283.57 2710165.66 00:17:25.152 00:17:25.152 real 0m8.464s 00:17:25.152 user 0m15.664s 00:17:25.152 sys 0m0.424s 00:17:25.152 16:05:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.152 16:05:23 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.152 ************************************ 00:17:25.152 END TEST bdev_verify_big_io 00:17:25.152 ************************************ 00:17:25.152 16:05:23 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:25.152 16:05:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:25.152 16:05:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.152 16:05:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.152 ************************************ 00:17:25.152 START TEST bdev_write_zeroes 00:17:25.152 ************************************ 00:17:25.152 16:05:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:25.152 [2024-11-20 16:05:23.275594] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:25.152 [2024-11-20 16:05:23.275714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73161 ] 00:17:25.413 [2024-11-20 16:05:23.430680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.413 [2024-11-20 16:05:23.530984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.675 Running I/O for 1 seconds... 
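With fixed-size I/Os the IOPS and MiB/s columns carry the same information: MiB/s = IOPS x io_size / 2^20, which at this job's 4 KiB I/O size (-o 4096) is simply IOPS / 256. A sketch of that check against the first write_zeroes sample reported below:

    echo 'scale=2; 79392 / 256' | bc   # 310.12, matching "79392.00 IOPS, 310.12 MiB/s"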
00:17:26.948 79392.00 IOPS, 310.12 MiB/s 00:17:26.948 Latency(us) 00:17:26.948 [2024-11-20T16:05:25.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.948 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.948 nvme0n1 : 1.02 12837.62 50.15 0.00 0.00 9960.91 4965.61 25306.98 00:17:26.948 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.948 nvme0n2 : 1.01 12871.30 50.28 0.00 0.00 9927.52 4562.31 23088.84 00:17:26.948 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.948 nvme0n3 : 1.02 12821.29 50.08 0.00 0.00 9958.45 4990.82 21979.77 00:17:26.948 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.948 nvme1n1 : 1.02 14776.54 57.72 0.00 0.00 8633.71 3528.86 18148.43 00:17:26.948 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.948 nvme2n1 : 1.02 12932.45 50.52 0.00 0.00 9816.71 4285.05 24197.91 00:17:26.948 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.948 nvme3n1 : 1.02 13042.84 50.95 0.00 0.00 9721.71 4184.22 24802.86 00:17:26.948 [2024-11-20T16:05:25.198Z] =================================================================================================================== 00:17:26.948 [2024-11-20T16:05:25.198Z] Total : 79282.04 309.70 0.00 0.00 9645.16 3528.86 25306.98 00:17:27.519 00:17:27.519 real 0m2.448s 00:17:27.519 user 0m1.816s 00:17:27.519 sys 0m0.435s 00:17:27.519 ************************************ 00:17:27.519 END TEST bdev_write_zeroes 00:17:27.519 ************************************ 00:17:27.519 16:05:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.519 16:05:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:27.780 16:05:25 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:27.780 16:05:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:27.780 16:05:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.780 16:05:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.780 ************************************ 00:17:27.780 START TEST bdev_json_nonenclosed 00:17:27.780 ************************************ 00:17:27.780 16:05:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:27.780 [2024-11-20 16:05:25.858959] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:17:27.780 [2024-11-20 16:05:25.859069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73210 ] 00:17:27.780 [2024-11-20 16:05:26.018970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.041 [2024-11-20 16:05:26.119349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.041 [2024-11-20 16:05:26.119426] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:28.041 [2024-11-20 16:05:26.119443] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:28.041 [2024-11-20 16:05:26.119453] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:28.301 00:17:28.301 real 0m0.498s 00:17:28.302 user 0m0.307s 00:17:28.302 sys 0m0.086s 00:17:28.302 16:05:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.302 16:05:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:28.302 ************************************ 00:17:28.302 END TEST bdev_json_nonenclosed 00:17:28.302 ************************************ 00:17:28.302 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:28.302 16:05:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:28.302 16:05:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.302 16:05:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:28.302 ************************************ 00:17:28.302 START TEST bdev_json_nonarray 00:17:28.302 ************************************ 00:17:28.302 16:05:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:28.302 [2024-11-20 16:05:26.411470] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:28.302 [2024-11-20 16:05:26.411591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73236 ] 00:17:28.563 [2024-11-20 16:05:26.571717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.563 [2024-11-20 16:05:26.671246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.563 [2024-11-20 16:05:26.671342] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:28.563 [2024-11-20 16:05:26.671360] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:28.563 [2024-11-20 16:05:26.671369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:28.825 00:17:28.825 real 0m0.502s 00:17:28.825 user 0m0.299s 00:17:28.825 sys 0m0.098s 00:17:28.825 16:05:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.825 ************************************ 00:17:28.825 END TEST bdev_json_nonarray 00:17:28.825 ************************************ 00:17:28.825 16:05:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:28.825 16:05:26 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:29.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.173 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:16.173 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:16.173 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:16.173 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:16.173 00:18:16.173 real 1m34.830s 00:18:16.173 user 1m25.759s 00:18:16.174 sys 1m34.398s 00:18:16.174 16:06:13 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.174 16:06:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:16.174 ************************************ 00:18:16.174 END TEST blockdev_xnvme 00:18:16.174 ************************************ 00:18:16.174 16:06:13 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:16.174 16:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.174 16:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.174 16:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:16.174 ************************************ 00:18:16.174 START TEST ublk 00:18:16.174 ************************************ 00:18:16.174 16:06:13 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:16.174 * Looking for test storage... 
00:18:16.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.174 16:06:14 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.174 16:06:14 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.174 16:06:14 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.174 16:06:14 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.174 16:06:14 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.174 16:06:14 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:16.174 16:06:14 ublk -- scripts/common.sh@345 -- # : 1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.174 16:06:14 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:16.174 16:06:14 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@353 -- # local d=1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.174 16:06:14 ublk -- scripts/common.sh@355 -- # echo 1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.174 16:06:14 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@353 -- # local d=2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.174 16:06:14 ublk -- scripts/common.sh@355 -- # echo 2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.174 16:06:14 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.174 16:06:14 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.174 16:06:14 ublk -- scripts/common.sh@368 -- # return 0 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.174 --rc genhtml_branch_coverage=1 00:18:16.174 --rc genhtml_function_coverage=1 00:18:16.174 --rc genhtml_legend=1 00:18:16.174 --rc geninfo_all_blocks=1 00:18:16.174 --rc geninfo_unexecuted_blocks=1 00:18:16.174 00:18:16.174 ' 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.174 --rc genhtml_branch_coverage=1 00:18:16.174 --rc genhtml_function_coverage=1 00:18:16.174 --rc genhtml_legend=1 00:18:16.174 --rc geninfo_all_blocks=1 00:18:16.174 --rc geninfo_unexecuted_blocks=1 00:18:16.174 00:18:16.174 ' 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.174 --rc genhtml_branch_coverage=1 00:18:16.174 --rc 
genhtml_function_coverage=1 00:18:16.174 --rc genhtml_legend=1 00:18:16.174 --rc geninfo_all_blocks=1 00:18:16.174 --rc geninfo_unexecuted_blocks=1 00:18:16.174 00:18:16.174 ' 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.174 --rc genhtml_branch_coverage=1 00:18:16.174 --rc genhtml_function_coverage=1 00:18:16.174 --rc genhtml_legend=1 00:18:16.174 --rc geninfo_all_blocks=1 00:18:16.174 --rc geninfo_unexecuted_blocks=1 00:18:16.174 00:18:16.174 ' 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:16.174 16:06:14 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:16.174 16:06:14 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:16.174 16:06:14 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:16.174 16:06:14 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:16.174 16:06:14 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:16.174 16:06:14 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:16.174 16:06:14 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:16.174 16:06:14 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:16.174 16:06:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.174 16:06:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.174 ************************************ 00:18:16.174 START TEST test_save_ublk_config 00:18:16.174 ************************************ 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73548 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73548 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73548 ']' 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:16.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.174 16:06:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:16.174 [2024-11-20 16:06:14.167266] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:16.174 [2024-11-20 16:06:14.167388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73548 ] 00:18:16.174 [2024-11-20 16:06:14.327825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.435 [2024-11-20 16:06:14.426350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.001 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.001 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:17.001 16:06:15 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:17.001 16:06:15 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:17.001 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.001 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:17.001 [2024-11-20 16:06:15.044755] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:17.001 [2024-11-20 16:06:15.045559] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:17.001 malloc0 00:18:17.001 [2024-11-20 16:06:15.108869] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:17.001 [2024-11-20 16:06:15.108955] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:17.001 [2024-11-20 16:06:15.108965] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:17.001 [2024-11-20 16:06:15.108972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:17.001 [2024-11-20 16:06:15.117822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:17.001 [2024-11-20 16:06:15.117843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:17.001 [2024-11-20 16:06:15.124752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:17.001 [2024-11-20 16:06:15.124868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:17.001 [2024-11-20 16:06:15.141748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:17.001 0 00:18:17.002 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.002 16:06:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:17.002 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.002 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:17.260 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.260 16:06:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:17.260 "subsystems": [ 00:18:17.260 { 00:18:17.260 "subsystem": "fsdev", 00:18:17.260 
"config": [ 00:18:17.260 { 00:18:17.260 "method": "fsdev_set_opts", 00:18:17.260 "params": { 00:18:17.260 "fsdev_io_pool_size": 65535, 00:18:17.260 "fsdev_io_cache_size": 256 00:18:17.260 } 00:18:17.260 } 00:18:17.260 ] 00:18:17.260 }, 00:18:17.260 { 00:18:17.260 "subsystem": "keyring", 00:18:17.260 "config": [] 00:18:17.260 }, 00:18:17.260 { 00:18:17.260 "subsystem": "iobuf", 00:18:17.260 "config": [ 00:18:17.260 { 00:18:17.260 "method": "iobuf_set_options", 00:18:17.260 "params": { 00:18:17.260 "small_pool_count": 8192, 00:18:17.260 "large_pool_count": 1024, 00:18:17.260 "small_bufsize": 8192, 00:18:17.260 "large_bufsize": 135168, 00:18:17.260 "enable_numa": false 00:18:17.260 } 00:18:17.260 } 00:18:17.260 ] 00:18:17.260 }, 00:18:17.260 { 00:18:17.260 "subsystem": "sock", 00:18:17.260 "config": [ 00:18:17.260 { 00:18:17.260 "method": "sock_set_default_impl", 00:18:17.260 "params": { 00:18:17.260 "impl_name": "posix" 00:18:17.260 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "sock_impl_set_options", 00:18:17.261 "params": { 00:18:17.261 "impl_name": "ssl", 00:18:17.261 "recv_buf_size": 4096, 00:18:17.261 "send_buf_size": 4096, 00:18:17.261 "enable_recv_pipe": true, 00:18:17.261 "enable_quickack": false, 00:18:17.261 "enable_placement_id": 0, 00:18:17.261 "enable_zerocopy_send_server": true, 00:18:17.261 "enable_zerocopy_send_client": false, 00:18:17.261 "zerocopy_threshold": 0, 00:18:17.261 "tls_version": 0, 00:18:17.261 "enable_ktls": false 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "sock_impl_set_options", 00:18:17.261 "params": { 00:18:17.261 "impl_name": "posix", 00:18:17.261 "recv_buf_size": 2097152, 00:18:17.261 "send_buf_size": 2097152, 00:18:17.261 "enable_recv_pipe": true, 00:18:17.261 "enable_quickack": false, 00:18:17.261 "enable_placement_id": 0, 00:18:17.261 "enable_zerocopy_send_server": true, 00:18:17.261 "enable_zerocopy_send_client": false, 00:18:17.261 "zerocopy_threshold": 0, 00:18:17.261 "tls_version": 0, 00:18:17.261 "enable_ktls": false 00:18:17.261 } 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "vmd", 00:18:17.261 "config": [] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "accel", 00:18:17.261 "config": [ 00:18:17.261 { 00:18:17.261 "method": "accel_set_options", 00:18:17.261 "params": { 00:18:17.261 "small_cache_size": 128, 00:18:17.261 "large_cache_size": 16, 00:18:17.261 "task_count": 2048, 00:18:17.261 "sequence_count": 2048, 00:18:17.261 "buf_count": 2048 00:18:17.261 } 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "bdev", 00:18:17.261 "config": [ 00:18:17.261 { 00:18:17.261 "method": "bdev_set_options", 00:18:17.261 "params": { 00:18:17.261 "bdev_io_pool_size": 65535, 00:18:17.261 "bdev_io_cache_size": 256, 00:18:17.261 "bdev_auto_examine": true, 00:18:17.261 "iobuf_small_cache_size": 128, 00:18:17.261 "iobuf_large_cache_size": 16 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "bdev_raid_set_options", 00:18:17.261 "params": { 00:18:17.261 "process_window_size_kb": 1024, 00:18:17.261 "process_max_bandwidth_mb_sec": 0 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "bdev_iscsi_set_options", 00:18:17.261 "params": { 00:18:17.261 "timeout_sec": 30 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "bdev_nvme_set_options", 00:18:17.261 "params": { 00:18:17.261 "action_on_timeout": "none", 00:18:17.261 "timeout_us": 0, 00:18:17.261 "timeout_admin_us": 0, 00:18:17.261 
"keep_alive_timeout_ms": 10000, 00:18:17.261 "arbitration_burst": 0, 00:18:17.261 "low_priority_weight": 0, 00:18:17.261 "medium_priority_weight": 0, 00:18:17.261 "high_priority_weight": 0, 00:18:17.261 "nvme_adminq_poll_period_us": 10000, 00:18:17.261 "nvme_ioq_poll_period_us": 0, 00:18:17.261 "io_queue_requests": 0, 00:18:17.261 "delay_cmd_submit": true, 00:18:17.261 "transport_retry_count": 4, 00:18:17.261 "bdev_retry_count": 3, 00:18:17.261 "transport_ack_timeout": 0, 00:18:17.261 "ctrlr_loss_timeout_sec": 0, 00:18:17.261 "reconnect_delay_sec": 0, 00:18:17.261 "fast_io_fail_timeout_sec": 0, 00:18:17.261 "disable_auto_failback": false, 00:18:17.261 "generate_uuids": false, 00:18:17.261 "transport_tos": 0, 00:18:17.261 "nvme_error_stat": false, 00:18:17.261 "rdma_srq_size": 0, 00:18:17.261 "io_path_stat": false, 00:18:17.261 "allow_accel_sequence": false, 00:18:17.261 "rdma_max_cq_size": 0, 00:18:17.261 "rdma_cm_event_timeout_ms": 0, 00:18:17.261 "dhchap_digests": [ 00:18:17.261 "sha256", 00:18:17.261 "sha384", 00:18:17.261 "sha512" 00:18:17.261 ], 00:18:17.261 "dhchap_dhgroups": [ 00:18:17.261 "null", 00:18:17.261 "ffdhe2048", 00:18:17.261 "ffdhe3072", 00:18:17.261 "ffdhe4096", 00:18:17.261 "ffdhe6144", 00:18:17.261 "ffdhe8192" 00:18:17.261 ] 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "bdev_nvme_set_hotplug", 00:18:17.261 "params": { 00:18:17.261 "period_us": 100000, 00:18:17.261 "enable": false 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "bdev_malloc_create", 00:18:17.261 "params": { 00:18:17.261 "name": "malloc0", 00:18:17.261 "num_blocks": 8192, 00:18:17.261 "block_size": 4096, 00:18:17.261 "physical_block_size": 4096, 00:18:17.261 "uuid": "66e802bc-1d71-4a6d-bb4b-509ac289692a", 00:18:17.261 "optimal_io_boundary": 0, 00:18:17.261 "md_size": 0, 00:18:17.261 "dif_type": 0, 00:18:17.261 "dif_is_head_of_md": false, 00:18:17.261 "dif_pi_format": 0 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "bdev_wait_for_examine" 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "scsi", 00:18:17.261 "config": null 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "scheduler", 00:18:17.261 "config": [ 00:18:17.261 { 00:18:17.261 "method": "framework_set_scheduler", 00:18:17.261 "params": { 00:18:17.261 "name": "static" 00:18:17.261 } 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "vhost_scsi", 00:18:17.261 "config": [] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "vhost_blk", 00:18:17.261 "config": [] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "ublk", 00:18:17.261 "config": [ 00:18:17.261 { 00:18:17.261 "method": "ublk_create_target", 00:18:17.261 "params": { 00:18:17.261 "cpumask": "1" 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "ublk_start_disk", 00:18:17.261 "params": { 00:18:17.261 "bdev_name": "malloc0", 00:18:17.261 "ublk_id": 0, 00:18:17.261 "num_queues": 1, 00:18:17.261 "queue_depth": 128 00:18:17.261 } 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "nbd", 00:18:17.261 "config": [] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "nvmf", 00:18:17.261 "config": [ 00:18:17.261 { 00:18:17.261 "method": "nvmf_set_config", 00:18:17.261 "params": { 00:18:17.261 "discovery_filter": "match_any", 00:18:17.261 "admin_cmd_passthru": { 00:18:17.261 "identify_ctrlr": false 00:18:17.261 }, 00:18:17.261 "dhchap_digests": [ 00:18:17.261 "sha256", 00:18:17.261 
"sha384", 00:18:17.261 "sha512" 00:18:17.261 ], 00:18:17.261 "dhchap_dhgroups": [ 00:18:17.261 "null", 00:18:17.261 "ffdhe2048", 00:18:17.261 "ffdhe3072", 00:18:17.261 "ffdhe4096", 00:18:17.261 "ffdhe6144", 00:18:17.261 "ffdhe8192" 00:18:17.261 ] 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "nvmf_set_max_subsystems", 00:18:17.261 "params": { 00:18:17.261 "max_subsystems": 1024 00:18:17.261 } 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "method": "nvmf_set_crdt", 00:18:17.261 "params": { 00:18:17.261 "crdt1": 0, 00:18:17.261 "crdt2": 0, 00:18:17.261 "crdt3": 0 00:18:17.261 } 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "subsystem": "iscsi", 00:18:17.261 "config": [ 00:18:17.261 { 00:18:17.261 "method": "iscsi_set_options", 00:18:17.261 "params": { 00:18:17.261 "node_base": "iqn.2016-06.io.spdk", 00:18:17.261 "max_sessions": 128, 00:18:17.261 "max_connections_per_session": 2, 00:18:17.261 "max_queue_depth": 64, 00:18:17.262 "default_time2wait": 2, 00:18:17.262 "default_time2retain": 20, 00:18:17.262 "first_burst_length": 8192, 00:18:17.262 "immediate_data": true, 00:18:17.262 "allow_duplicated_isid": false, 00:18:17.262 "error_recovery_level": 0, 00:18:17.262 "nop_timeout": 60, 00:18:17.262 "nop_in_interval": 30, 00:18:17.262 "disable_chap": false, 00:18:17.262 "require_chap": false, 00:18:17.262 "mutual_chap": false, 00:18:17.262 "chap_group": 0, 00:18:17.262 "max_large_datain_per_connection": 64, 00:18:17.262 "max_r2t_per_connection": 4, 00:18:17.262 "pdu_pool_size": 36864, 00:18:17.262 "immediate_data_pool_size": 16384, 00:18:17.262 "data_out_pool_size": 2048 00:18:17.262 } 00:18:17.262 } 00:18:17.262 ] 00:18:17.262 } 00:18:17.262 ] 00:18:17.262 }' 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73548 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73548 ']' 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73548 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73548 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.262 killing process with pid 73548 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73548' 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73548 00:18:17.262 16:06:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73548 00:18:18.751 [2024-11-20 16:06:16.845272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:18.751 [2024-11-20 16:06:16.875825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:18.751 [2024-11-20 16:06:16.875946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:18.751 [2024-11-20 16:06:16.883750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:18.751 [2024-11-20 16:06:16.883799] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:18:18.751 [2024-11-20 16:06:16.883811] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:18.751 [2024-11-20 16:06:16.883834] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:18.751 [2024-11-20 16:06:16.883972] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73603 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73603 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73603 ']' 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:20.124 16:06:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:20.124 "subsystems": [ 00:18:20.124 { 00:18:20.124 "subsystem": "fsdev", 00:18:20.124 "config": [ 00:18:20.124 { 00:18:20.124 "method": "fsdev_set_opts", 00:18:20.124 "params": { 00:18:20.124 "fsdev_io_pool_size": 65535, 00:18:20.124 "fsdev_io_cache_size": 256 00:18:20.124 } 00:18:20.124 } 00:18:20.124 ] 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "subsystem": "keyring", 00:18:20.124 "config": [] 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "subsystem": "iobuf", 00:18:20.124 "config": [ 00:18:20.124 { 00:18:20.124 "method": "iobuf_set_options", 00:18:20.124 "params": { 00:18:20.124 "small_pool_count": 8192, 00:18:20.124 "large_pool_count": 1024, 00:18:20.124 "small_bufsize": 8192, 00:18:20.124 "large_bufsize": 135168, 00:18:20.124 "enable_numa": false 00:18:20.124 } 00:18:20.124 } 00:18:20.124 ] 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "subsystem": "sock", 00:18:20.124 "config": [ 00:18:20.124 { 00:18:20.124 "method": "sock_set_default_impl", 00:18:20.124 "params": { 00:18:20.124 "impl_name": "posix" 00:18:20.124 } 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "method": "sock_impl_set_options", 00:18:20.124 "params": { 00:18:20.124 "impl_name": "ssl", 00:18:20.124 "recv_buf_size": 4096, 00:18:20.124 "send_buf_size": 4096, 00:18:20.124 "enable_recv_pipe": true, 00:18:20.124 "enable_quickack": false, 00:18:20.124 "enable_placement_id": 0, 00:18:20.124 "enable_zerocopy_send_server": true, 00:18:20.124 "enable_zerocopy_send_client": false, 00:18:20.124 "zerocopy_threshold": 0, 00:18:20.124 "tls_version": 0, 00:18:20.124 "enable_ktls": false 00:18:20.124 } 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "method": "sock_impl_set_options", 00:18:20.124 "params": { 00:18:20.124 "impl_name": "posix", 00:18:20.124 "recv_buf_size": 2097152, 00:18:20.124 "send_buf_size": 2097152, 00:18:20.124 "enable_recv_pipe": true, 00:18:20.124 "enable_quickack": false, 00:18:20.124 "enable_placement_id": 0, 00:18:20.124 "enable_zerocopy_send_server": true, 00:18:20.124 "enable_zerocopy_send_client": false, 00:18:20.124 "zerocopy_threshold": 0, 
00:18:20.124 "tls_version": 0, 00:18:20.124 "enable_ktls": false 00:18:20.124 } 00:18:20.124 } 00:18:20.124 ] 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "subsystem": "vmd", 00:18:20.124 "config": [] 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "subsystem": "accel", 00:18:20.124 "config": [ 00:18:20.124 { 00:18:20.124 "method": "accel_set_options", 00:18:20.124 "params": { 00:18:20.124 "small_cache_size": 128, 00:18:20.124 "large_cache_size": 16, 00:18:20.124 "task_count": 2048, 00:18:20.124 "sequence_count": 2048, 00:18:20.124 "buf_count": 2048 00:18:20.124 } 00:18:20.124 } 00:18:20.124 ] 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "subsystem": "bdev", 00:18:20.124 "config": [ 00:18:20.124 { 00:18:20.124 "method": "bdev_set_options", 00:18:20.124 "params": { 00:18:20.124 "bdev_io_pool_size": 65535, 00:18:20.124 "bdev_io_cache_size": 256, 00:18:20.124 "bdev_auto_examine": true, 00:18:20.124 "iobuf_small_cache_size": 128, 00:18:20.124 "iobuf_large_cache_size": 16 00:18:20.124 } 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "method": "bdev_raid_set_options", 00:18:20.124 "params": { 00:18:20.124 "process_window_size_kb": 1024, 00:18:20.124 "process_max_bandwidth_mb_sec": 0 00:18:20.124 } 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "method": "bdev_iscsi_set_options", 00:18:20.124 "params": { 00:18:20.124 "timeout_sec": 30 00:18:20.124 } 00:18:20.124 }, 00:18:20.124 { 00:18:20.124 "method": "bdev_nvme_set_options", 00:18:20.124 "params": { 00:18:20.124 "action_on_timeout": "none", 00:18:20.124 "timeout_us": 0, 00:18:20.124 "timeout_admin_us": 0, 00:18:20.124 "keep_alive_timeout_ms": 10000, 00:18:20.124 "arbitration_burst": 0, 00:18:20.124 "low_priority_weight": 0, 00:18:20.124 "medium_priority_weight": 0, 00:18:20.124 "high_priority_weight": 0, 00:18:20.124 "nvme_adminq_poll_period_us": 10000, 00:18:20.124 "nvme_ioq_poll_period_us": 0, 00:18:20.124 "io_queue_requests": 0, 00:18:20.124 "delay_cmd_submit": true, 00:18:20.124 "transport_retry_count": 4, 00:18:20.124 "bdev_retry_count": 3, 00:18:20.124 "transport_ack_timeout": 0, 00:18:20.124 "ctrlr_loss_timeout_sec": 0, 00:18:20.124 "reconnect_delay_sec": 0, 00:18:20.124 "fast_io_fail_timeout_sec": 0, 00:18:20.124 "disable_auto_failback": false, 00:18:20.124 "generate_uuids": false, 00:18:20.124 "transport_tos": 0, 00:18:20.124 "nvme_error_stat": false, 00:18:20.124 "rdma_srq_size": 0, 00:18:20.124 "io_path_stat": false, 00:18:20.124 "allow_accel_sequence": false, 00:18:20.124 "rdma_max_cq_size": 0, 00:18:20.124 "rdma_cm_event_timeout_ms": 0, 00:18:20.124 "dhchap_digests": [ 00:18:20.124 "sha256", 00:18:20.124 "sha384", 00:18:20.124 "sha512" 00:18:20.124 ], 00:18:20.124 "dhchap_dhgroups": [ 00:18:20.124 "null", 00:18:20.124 "ffdhe2048", 00:18:20.124 "ffdhe3072", 00:18:20.124 "ffdhe4096", 00:18:20.124 "ffdhe6144", 00:18:20.124 "ffdhe8192" 00:18:20.124 ] 00:18:20.124 } 00:18:20.124 }, 00:18:20.124 { 00:18:20.125 "method": "bdev_nvme_set_hotplug", 00:18:20.125 "params": { 00:18:20.125 "period_us": 100000, 00:18:20.125 "enable": false 00:18:20.125 } 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "method": "bdev_malloc_create", 00:18:20.125 "params": { 00:18:20.125 "name": "malloc0", 00:18:20.125 "num_blocks": 8192, 00:18:20.125 "block_size": 4096, 00:18:20.125 "physical_block_size": 4096, 00:18:20.125 "uuid": "66e802bc-1d71-4a6d-bb4b-509ac289692a", 00:18:20.125 "optimal_io_boundary": 0, 00:18:20.125 "md_size": 0, 00:18:20.125 "dif_type": 0, 00:18:20.125 "dif_is_head_of_md": false, 00:18:20.125 "dif_pi_format": 0 00:18:20.125 } 00:18:20.125 }, 00:18:20.125 
{ 00:18:20.125 "method": "bdev_wait_for_examine" 00:18:20.125 } 00:18:20.125 ] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "scsi", 00:18:20.125 "config": null 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "scheduler", 00:18:20.125 "config": [ 00:18:20.125 { 00:18:20.125 "method": "framework_set_scheduler", 00:18:20.125 "params": { 00:18:20.125 "name": "static" 00:18:20.125 } 00:18:20.125 } 00:18:20.125 ] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "vhost_scsi", 00:18:20.125 "config": [] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "vhost_blk", 00:18:20.125 "config": [] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "ublk", 00:18:20.125 "config": [ 00:18:20.125 { 00:18:20.125 "method": "ublk_create_target", 00:18:20.125 "params": { 00:18:20.125 "cpumask": "1" 00:18:20.125 } 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "method": "ublk_start_disk", 00:18:20.125 "params": { 00:18:20.125 "bdev_name": "malloc0", 00:18:20.125 "ublk_id": 0, 00:18:20.125 "num_queues": 1, 00:18:20.125 "queue_depth": 128 00:18:20.125 } 00:18:20.125 } 00:18:20.125 ] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "nbd", 00:18:20.125 "config": [] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "nvmf", 00:18:20.125 "config": [ 00:18:20.125 { 00:18:20.125 "method": "nvmf_set_config", 00:18:20.125 "params": { 00:18:20.125 "discovery_filter": "match_any", 00:18:20.125 "admin_cmd_passthru": { 00:18:20.125 "identify_ctrlr": false 00:18:20.125 }, 00:18:20.125 "dhchap_digests": [ 00:18:20.125 "sha256", 00:18:20.125 "sha384", 00:18:20.125 "sha512" 00:18:20.125 ], 00:18:20.125 "dhchap_dhgroups": [ 00:18:20.125 "null", 00:18:20.125 "ffdhe2048", 00:18:20.125 "ffdhe3072", 00:18:20.125 "ffdhe4096", 00:18:20.125 "ffdhe6144", 00:18:20.125 "ffdhe8192" 00:18:20.125 ] 00:18:20.125 } 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "method": "nvmf_set_max_subsystems", 00:18:20.125 "params": { 00:18:20.125 "max_subsystems": 1024 00:18:20.125 } 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "method": "nvmf_set_crdt", 00:18:20.125 "params": { 00:18:20.125 "crdt1": 0, 00:18:20.125 "crdt2": 0, 00:18:20.125 "crdt3": 0 00:18:20.125 } 00:18:20.125 } 00:18:20.125 ] 00:18:20.125 }, 00:18:20.125 { 00:18:20.125 "subsystem": "iscsi", 00:18:20.125 "config": [ 00:18:20.125 { 00:18:20.125 "method": "iscsi_set_options", 00:18:20.125 "params": { 00:18:20.125 "node_base": "iqn.2016-06.io.spdk", 00:18:20.125 "max_sessions": 128, 00:18:20.125 "max_connections_per_session": 2, 00:18:20.125 "max_queue_depth": 64, 00:18:20.125 "default_time2wait": 2, 00:18:20.125 "default_time2retain": 20, 00:18:20.125 "first_burst_length": 8192, 00:18:20.125 "immediate_data": true, 00:18:20.125 "allow_duplicated_isid": false, 00:18:20.125 "error_recovery_level": 0, 00:18:20.125 "nop_timeout": 60, 00:18:20.125 "nop_in_interval": 30, 00:18:20.125 "disable_chap": false, 00:18:20.125 "require_chap": false, 00:18:20.125 "mutual_chap": false, 00:18:20.125 "chap_group": 0, 00:18:20.125 "max_large_datain_per_connection": 64, 00:18:20.125 "max_r2t_per_connection": 4, 00:18:20.125 "pdu_pool_size": 36864, 00:18:20.125 "immediate_data_pool_size": 16384, 00:18:20.125 "data_out_pool_size": 2048 00:18:20.125 } 00:18:20.125 } 00:18:20.125 ] 00:18:20.125 } 00:18:20.125 ] 00:18:20.125 }' 00:18:20.125 [2024-11-20 16:06:18.257496] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:18:20.125 [2024-11-20 16:06:18.257611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73603 ] 00:18:20.383 [2024-11-20 16:06:18.412310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.383 [2024-11-20 16:06:18.492156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.946 [2024-11-20 16:06:19.146738] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:20.947 [2024-11-20 16:06:19.147402] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:20.947 [2024-11-20 16:06:19.154827] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:20.947 [2024-11-20 16:06:19.154885] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:20.947 [2024-11-20 16:06:19.154892] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:20.947 [2024-11-20 16:06:19.154899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:20.947 [2024-11-20 16:06:19.163791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:20.947 [2024-11-20 16:06:19.163809] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:20.947 [2024-11-20 16:06:19.170745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:20.947 [2024-11-20 16:06:19.170820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:20.947 [2024-11-20 16:06:19.187747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:21.202 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73603 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73603 ']' 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73603 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73603 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.203 killing process with pid 73603 00:18:21.203 
16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73603' 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73603 00:18:21.203 16:06:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73603 00:18:22.185 [2024-11-20 16:06:20.288710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:22.185 [2024-11-20 16:06:20.319809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:22.185 [2024-11-20 16:06:20.319913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:22.185 [2024-11-20 16:06:20.327749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:22.185 [2024-11-20 16:06:20.327786] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:22.185 [2024-11-20 16:06:20.327792] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:22.185 [2024-11-20 16:06:20.327814] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:22.185 [2024-11-20 16:06:20.327925] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:23.562 16:06:21 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:23.562 00:18:23.562 real 0m7.417s 00:18:23.562 user 0m4.934s 00:18:23.562 sys 0m3.109s 00:18:23.562 16:06:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.562 ************************************ 00:18:23.562 END TEST test_save_ublk_config 00:18:23.562 ************************************ 00:18:23.562 16:06:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:23.562 16:06:21 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73674 00:18:23.562 16:06:21 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:23.562 16:06:21 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.562 16:06:21 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73674 00:18:23.562 16:06:21 ublk -- common/autotest_common.sh@835 -- # '[' -z 73674 ']' 00:18:23.562 16:06:21 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.562 16:06:21 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.562 16:06:21 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.562 16:06:21 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.562 16:06:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.562 [2024-11-20 16:06:21.616438] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
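Two harness helpers dominate this stretch of trace: killprocess, which checks that the pid still names a live process (the reactor) before signalling it, and waitforlisten, which blocks until the freshly started spdk_tgt (pid 73674 here) answers on its RPC socket. Simplified sketches inferred from the xtrace, not copied verbatim from autotest_common.sh:

    # rough shape of the two helpers traced above
    killprocess() {
        local pid=$1
        ps --no-headers -o comm= "$pid" >/dev/null || return 1   # still running?
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                               # SIGTERM, then reap
    }

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1               # target died early
            # the socket is usable once any RPC gets answered
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }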
00:18:23.562 [2024-11-20 16:06:21.616901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73674 ] 00:18:23.562 [2024-11-20 16:06:21.771538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:23.821 [2024-11-20 16:06:21.852182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.821 [2024-11-20 16:06:21.852268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.387 16:06:22 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.387 16:06:22 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:24.387 16:06:22 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:24.387 16:06:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:24.387 16:06:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.387 16:06:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.387 ************************************ 00:18:24.387 START TEST test_create_ublk 00:18:24.387 ************************************ 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:24.387 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.387 [2024-11-20 16:06:22.469742] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:24.387 [2024-11-20 16:06:22.471331] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.387 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:24.387 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.387 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:24.387 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.387 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.387 [2024-11-20 16:06:22.634851] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:24.387 [2024-11-20 16:06:22.635160] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:24.387 [2024-11-20 16:06:22.635174] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:24.387 [2024-11-20 16:06:22.635179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:24.646 [2024-11-20 16:06:22.643919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:24.646 [2024-11-20 16:06:22.643937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:24.646 
[2024-11-20 16:06:22.650745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:24.646 [2024-11-20 16:06:22.651232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:24.646 [2024-11-20 16:06:22.665746] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:24.646 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.646 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:24.647 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.647 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.647 16:06:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:24.647 { 00:18:24.647 "ublk_device": "/dev/ublkb0", 00:18:24.647 "id": 0, 00:18:24.647 "queue_depth": 512, 00:18:24.647 "num_queues": 4, 00:18:24.647 "bdev_name": "Malloc0" 00:18:24.647 } 00:18:24.647 ]' 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
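The fio_template assembled above is what lvol/common.sh's run_fio_test expands into: a direct-I/O write job over the whole 128 MiB device that stamps pattern 0xcc and verifies it as it goes. A condensed sketch of that wrapper, reconstructed from the trace with simplified argument handling:

    # sketch of run_fio_test <file> <offset> <size> <rw> <pattern> [extra_params]
    run_fio_test() {
        local file=$1 offset=$2 size=$3 rw=$4 pattern=$5 extra=$6 verify=""
        if [[ -n $pattern ]]; then
            verify="--do_verify=1 --verify=pattern --verify_pattern=$pattern --verify_state_save=0"
        fi
        fio --name=fio_test --filename="$file" --offset="$offset" --size="$size" \
            --rw="$rw" --direct=1 $extra $verify
    }

The invocation that follows is exactly this template with file=/dev/ublkb0, size=134217728 and '--time_based --runtime=10' as the extra parameters.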
00:18:24.647 16:06:22 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:18:24.929 fio: verification read phase will never start because write phase uses all of runtime
00:18:24.929 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:18:24.929 fio-3.35
00:18:24.929 Starting 1 process
00:18:34.932
00:18:34.932 fio_test: (groupid=0, jobs=1): err= 0: pid=73715: Wed Nov 20 16:06:33 2024
00:18:34.932   write: IOPS=17.9k, BW=70.1MiB/s (73.5MB/s)(701MiB/10001msec); 0 zone resets
00:18:34.932     clat (usec): min=35, max=11714, avg=54.94, stdev=116.09
00:18:34.932      lat (usec): min=35, max=11732, avg=55.40, stdev=116.11
00:18:34.932     clat percentiles (usec):
00:18:34.932      |  1.00th=[   41],  5.00th=[   43], 10.00th=[   44], 20.00th=[   46],
00:18:34.932      | 30.00th=[   48], 40.00th=[   49], 50.00th=[   50], 60.00th=[   51],
00:18:34.932      | 70.00th=[   52], 80.00th=[   54], 90.00th=[   57], 95.00th=[   61],
00:18:34.932      | 99.00th=[   71], 99.50th=[   77], 99.90th=[ 2638], 99.95th=[ 3326],
00:18:34.932      | 99.99th=[ 3687]
00:18:34.932    bw (  KiB/s): min=32872, max=77984, per=99.69%, avg=71531.89, stdev=9671.54, samples=19
00:18:34.932    iops        : min= 8218, max=19496, avg=17882.95, stdev=2417.89, samples=19
00:18:34.932   lat (usec)   : 50=51.96%, 100=47.74%, 250=0.09%, 500=0.04%, 750=0.01%
00:18:34.932   lat (usec)   : 1000=0.01%
00:18:34.932   lat (msec)   : 2=0.04%, 4=0.12%, 20=0.01%
00:18:34.932   cpu          : usr=3.10%, sys=14.11%, ctx=179408, majf=0, minf=797
00:18:34.932   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:34.932      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:34.932      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:34.932      issued rwts: total=0,179406,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:34.932      latency   : target=0, window=0, percentile=100.00%, depth=1
00:18:34.932
00:18:34.932 Run status group 0 (all jobs):
00:18:34.932   WRITE: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=701MiB (735MB), run=10001-10001msec
00:18:34.932
00:18:34.932 Disk stats (read/write):
00:18:34.932   ublkb0: ios=0/177491, merge=0/0, ticks=0/8284, in_queue=8285, util=99.10%
16:06:33 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:18:34.932 [2024-11-20 16:06:33.091677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:18:34.932 [2024-11-20 16:06:33.133787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:18:34.932 [2024-11-20 16:06:33.134378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:18:34.932 [2024-11-20 16:06:33.142773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:18:34.932 [2024-11-20 16:06:33.143007] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:18:34.932 [2024-11-20 16:06:33.143022] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:34.933 16:06:33 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.932 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:34.933 [2024-11-20 16:06:33.157808] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:34.933 request: 00:18:34.933 { 00:18:34.933 "ublk_id": 0, 00:18:34.933 "method": "ublk_stop_disk", 00:18:34.933 "req_id": 1 00:18:34.933 } 00:18:34.933 Got JSON-RPC error response 00:18:34.933 response: 00:18:34.933 { 00:18:34.933 "code": -19, 00:18:34.933 "message": "No such device" 00:18:34.933 } 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.933 16:06:33 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.933 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.191 [2024-11-20 16:06:33.181816] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:35.191 [2024-11-20 16:06:33.185379] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:35.191 [2024-11-20 16:06:33.185416] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:35.191 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.191 16:06:33 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:35.191 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.191 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.449 16:06:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:35.449 16:06:33 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:35.449 16:06:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:35.449 00:18:35.449 real 0m11.187s 00:18:35.449 user 0m0.614s 00:18:35.449 sys 0m1.496s 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.449 16:06:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.449 ************************************ 00:18:35.449 END TEST test_create_ublk 00:18:35.449 ************************************ 00:18:35.449 16:06:33 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:35.449 16:06:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:35.449 16:06:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.449 16:06:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.449 ************************************ 00:18:35.449 START TEST test_create_multi_ublk 00:18:35.449 ************************************ 00:18:35.449 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:35.449 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:35.449 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.449 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.706 [2024-11-20 16:06:33.701733] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:35.706 [2024-11-20 16:06:33.703304] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.706 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.706 [2024-11-20 16:06:33.928857] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:18:35.706 [2024-11-20 16:06:33.929160] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:35.706 [2024-11-20 16:06:33.929172] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:35.706 [2024-11-20 16:06:33.929181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:35.706 [2024-11-20 16:06:33.941792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:35.706 [2024-11-20 16:06:33.941814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:35.706 [2024-11-20 16:06:33.953747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:35.706 [2024-11-20 16:06:33.954270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:35.964 [2024-11-20 16:06:33.983746] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:35.964 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.964 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:35.964 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:35.964 16:06:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:35.964 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.964 16:06:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.964 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.964 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:35.964 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:35.964 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.964 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.964 [2024-11-20 16:06:34.198846] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:35.964 [2024-11-20 16:06:34.199147] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:35.964 [2024-11-20 16:06:34.199160] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:35.964 [2024-11-20 16:06:34.199165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:35.964 [2024-11-20 16:06:34.206762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:35.964 [2024-11-20 16:06:34.206780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:36.222 [2024-11-20 16:06:34.214756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:36.222 [2024-11-20 16:06:34.215278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:36.222 [2024-11-20 16:06:34.227743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:36.222 
16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.222 [2024-11-20 16:06:34.386832] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:36.222 [2024-11-20 16:06:34.387140] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:36.222 [2024-11-20 16:06:34.387152] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:36.222 [2024-11-20 16:06:34.387158] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:36.222 [2024-11-20 16:06:34.394756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:36.222 [2024-11-20 16:06:34.394776] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:36.222 [2024-11-20 16:06:34.402744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:36.222 [2024-11-20 16:06:34.403253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:36.222 [2024-11-20 16:06:34.411770] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.222 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.481 [2024-11-20 16:06:34.577861] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:36.481 [2024-11-20 16:06:34.578161] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:36.481 [2024-11-20 16:06:34.578175] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:36.481 [2024-11-20 16:06:34.578181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:36.481 
[2024-11-20 16:06:34.585778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:18:36.481 [2024-11-20 16:06:34.585797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:18:36.481 [2024-11-20 16:06:34.593759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:18:36.481 [2024-11-20 16:06:34.594267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:18:36.481 [2024-11-20 16:06:34.598517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:18:36.481 {
00:18:36.481 "ublk_device": "/dev/ublkb0",
00:18:36.481 "id": 0,
00:18:36.481 "queue_depth": 512,
00:18:36.481 "num_queues": 4,
00:18:36.481 "bdev_name": "Malloc0"
00:18:36.481 },
00:18:36.481 {
00:18:36.481 "ublk_device": "/dev/ublkb1",
00:18:36.481 "id": 1,
00:18:36.481 "queue_depth": 512,
00:18:36.481 "num_queues": 4,
00:18:36.481 "bdev_name": "Malloc1"
00:18:36.481 },
00:18:36.481 {
00:18:36.481 "ublk_device": "/dev/ublkb2",
00:18:36.481 "id": 2,
00:18:36.481 "queue_depth": 512,
00:18:36.481 "num_queues": 4,
00:18:36.481 "bdev_name": "Malloc2"
00:18:36.481 },
00:18:36.481 {
00:18:36.481 "ublk_device": "/dev/ublkb3",
00:18:36.481 "id": 3,
00:18:36.481 "queue_depth": 512,
00:18:36.481 "num_queues": 4,
00:18:36.481 "bdev_name": "Malloc3"
00:18:36.481 }
00:18:36.481 ]'
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:18:36.481 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:36.739 16:06:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:36.997 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.257 [2024-11-20 16:06:35.269848] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:37.257 [2024-11-20 16:06:35.303197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:37.257 [2024-11-20 16:06:35.304219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:37.257 [2024-11-20 16:06:35.312756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:37.257 [2024-11-20 16:06:35.312994] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:37.257 [2024-11-20 16:06:35.313006] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.257 [2024-11-20 16:06:35.328815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:37.257 [2024-11-20 16:06:35.361205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:37.257 [2024-11-20 16:06:35.362186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:37.257 [2024-11-20 16:06:35.373780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:37.257 [2024-11-20 16:06:35.374018] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:37.257 [2024-11-20 16:06:35.374026] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.257 [2024-11-20 16:06:35.388840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:37.257 [2024-11-20 16:06:35.434783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:37.257 [2024-11-20 16:06:35.435444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:37.257 [2024-11-20 16:06:35.440750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:37.257 [2024-11-20 16:06:35.441001] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:37.257 [2024-11-20 16:06:35.441012] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.257 16:06:35 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.257 [2024-11-20 16:06:35.448837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:37.257 [2024-11-20 16:06:35.489106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:37.257 [2024-11-20 16:06:35.490165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:37.257 [2024-11-20 16:06:35.496750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:37.257 [2024-11-20 16:06:35.496976] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:37.257 [2024-11-20 16:06:35.496984] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:37.516 [2024-11-20 16:06:35.695805] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:37.516 [2024-11-20 16:06:35.699464] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:37.516 [2024-11-20 16:06:35.699498] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.516 16:06:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.082 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.082 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.082 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:38.082 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.082 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.339 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.339 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.339 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:38.339 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.339 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:38.598 16:06:36 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:38.598 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:38.856 ************************************ 00:18:38.856 END TEST test_create_multi_ublk 00:18:38.856 ************************************ 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:38.856 00:18:38.856 real 0m3.236s 00:18:38.856 user 0m0.827s 00:18:38.856 sys 0m0.152s 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.856 16:06:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.856 16:06:36 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:38.856 16:06:36 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:38.856 16:06:36 ublk -- ublk/ublk.sh@130 -- # killprocess 73674 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@954 -- # '[' -z 73674 ']' 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@958 -- # kill -0 73674 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@959 -- # uname 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73674 00:18:38.856 killing process with pid 73674 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73674' 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@973 -- # kill 73674 00:18:38.856 16:06:36 ublk -- common/autotest_common.sh@978 -- # wait 73674 00:18:39.421 [2024-11-20 16:06:37.531807] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:39.421 [2024-11-20 16:06:37.532004] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:39.986 00:18:39.986 real 0m24.264s 00:18:39.986 user 0m34.877s 00:18:39.986 sys 0m9.482s 00:18:39.986 16:06:38 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.986 16:06:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:39.986 ************************************ 00:18:39.986 END TEST ublk 00:18:39.986 ************************************ 00:18:39.986 16:06:38 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:39.986 
16:06:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.986 16:06:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.244 16:06:38 -- common/autotest_common.sh@10 -- # set +x 00:18:40.244 ************************************ 00:18:40.244 START TEST ublk_recovery 00:18:40.244 ************************************ 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:40.244 * Looking for test storage... 00:18:40.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.244 16:06:38 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.244 --rc genhtml_branch_coverage=1 00:18:40.244 --rc genhtml_function_coverage=1 00:18:40.244 --rc genhtml_legend=1 00:18:40.244 --rc geninfo_all_blocks=1 00:18:40.244 --rc geninfo_unexecuted_blocks=1 00:18:40.244 00:18:40.244 ' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.244 --rc genhtml_branch_coverage=1 00:18:40.244 --rc genhtml_function_coverage=1 00:18:40.244 --rc genhtml_legend=1 00:18:40.244 --rc geninfo_all_blocks=1 00:18:40.244 --rc geninfo_unexecuted_blocks=1 00:18:40.244 00:18:40.244 ' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.244 --rc genhtml_branch_coverage=1 00:18:40.244 --rc genhtml_function_coverage=1 00:18:40.244 --rc genhtml_legend=1 00:18:40.244 --rc geninfo_all_blocks=1 00:18:40.244 --rc geninfo_unexecuted_blocks=1 00:18:40.244 00:18:40.244 ' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.244 --rc genhtml_branch_coverage=1 00:18:40.244 --rc genhtml_function_coverage=1 00:18:40.244 --rc genhtml_legend=1 00:18:40.244 --rc geninfo_all_blocks=1 00:18:40.244 --rc geninfo_unexecuted_blocks=1 00:18:40.244 00:18:40.244 ' 00:18:40.244 16:06:38 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:40.244 16:06:38 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:40.244 16:06:38 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:40.244 16:06:38 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74063 00:18:40.244 16:06:38 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.244 16:06:38 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74063 00:18:40.244 16:06:38 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74063 ']' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.244 16:06:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.244 [2024-11-20 16:06:38.450316] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:40.244 [2024-11-20 16:06:38.450870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74063 ] 00:18:40.502 [2024-11-20 16:06:38.606921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:40.502 [2024-11-20 16:06:38.690108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.502 [2024-11-20 16:06:38.690311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:41.070 16:06:39 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.070 [2024-11-20 16:06:39.295741] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:41.070 [2024-11-20 16:06:39.297374] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.070 16:06:39 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.070 16:06:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.328 malloc0 00:18:41.328 16:06:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.328 16:06:39 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:41.328 16:06:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.328 16:06:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.328 [2024-11-20 16:06:39.380848] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:41.328 [2024-11-20 16:06:39.380939] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:41.328 [2024-11-20 16:06:39.380948] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:41.328 [2024-11-20 16:06:39.380956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:41.328 [2024-11-20 16:06:39.388750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:41.328 [2024-11-20 16:06:39.388768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:41.328 [2024-11-20 16:06:39.396743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:41.328 [2024-11-20 16:06:39.396859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:41.328 [2024-11-20 16:06:39.411753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:41.328 1 00:18:41.328 16:06:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.328 16:06:39 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:42.263 16:06:40 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74098 00:18:42.263 16:06:40 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:42.263 16:06:40 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:42.521 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:42.521 fio-3.35 00:18:42.521 Starting 1 process 00:18:47.783 16:06:45 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74063 00:18:47.783 16:06:45 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:53.050 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74063 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:53.050 16:06:50 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74203 00:18:53.050 16:06:50 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.050 16:06:50 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74203 00:18:53.050 16:06:50 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:53.050 16:06:50 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74203 ']' 00:18:53.050 16:06:50 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.050 16:06:50 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.050 16:06:50 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.050 16:06:50 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.050 16:06:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.050 [2024-11-20 16:06:50.508581] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
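In rpc.py terms, the crash/recovery cycle ublk_recovery.sh drives here reduces to the following sketch (commands taken from the trace; assumes spdk_tgt and rpc.py are on PATH, the default /var/tmp/spdk.sock socket, and waitforlisten-style synchronization elided):

    # 1. Expose a ramdisk through ublk (setup traced above).
    modprobe ublk_drv
    spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev, 4 KiB blocks
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1: 2 queues, qd 128
    # 2. Put it under I/O load, then crash the target out from under the kernel.
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 "$spdk_pid"
    # 3. Restart the target and re-adopt the orphaned ublk device.
    spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1

After ublk_recover_disk, the repeated UBLK_CMD_GET_DEV_INFO polling traced below continues until the kernel device is ready, at which point START_USER_RECOVERY/END_USER_RECOVERY complete the hand-back and the still-running fio job resumes against /dev/ublkb1.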
00:18:53.050 [2024-11-20 16:06:50.508703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74203 ] 00:18:53.050 [2024-11-20 16:06:50.662260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.050 [2024-11-20 16:06:50.762865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.050 [2024-11-20 16:06:50.762884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.307 16:06:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:53.308 16:06:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.308 [2024-11-20 16:06:51.359743] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:53.308 [2024-11-20 16:06:51.361684] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.308 16:06:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.308 malloc0 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.308 16:06:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.308 [2024-11-20 16:06:51.463881] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:53.308 [2024-11-20 16:06:51.463926] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:53.308 [2024-11-20 16:06:51.463937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:53.308 [2024-11-20 16:06:51.471781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:53.308 [2024-11-20 16:06:51.471804] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:53.308 1 00:18:53.308 16:06:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.308 16:06:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74098 00:18:54.243 [2024-11-20 16:06:52.471834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:54.243 [2024-11-20 16:06:52.479748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:54.243 [2024-11-20 16:06:52.479769] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:55.633 [2024-11-20 16:06:53.479809] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:55.633 [2024-11-20 16:06:53.487746] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:55.633 [2024-11-20 16:06:53.487770] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:56.568 [2024-11-20 16:06:54.487800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:56.568 [2024-11-20 16:06:54.495752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:56.568 [2024-11-20 16:06:54.495771] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:56.568 [2024-11-20 16:06:54.495780] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:56.568 [2024-11-20 16:06:54.495863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:18.486 [2024-11-20 16:07:15.810749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:18.486 [2024-11-20 16:07:15.813982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:18.486 [2024-11-20 16:07:15.817934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:18.486 [2024-11-20 16:07:15.817954] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:45.020 00:19:45.020 fio_test: (groupid=0, jobs=1): err= 0: pid=74101: Wed Nov 20 16:07:40 2024 00:19:45.020 read: IOPS=14.5k, BW=56.6MiB/s (59.4MB/s)(3399MiB/60002msec) 00:19:45.020 slat (nsec): min=1100, max=808287, avg=5009.61, stdev=1982.40 00:19:45.020 clat (usec): min=530, max=30401k, avg=4757.51, stdev=282210.35 00:19:45.020 lat (usec): min=538, max=30401k, avg=4762.52, stdev=282210.34 00:19:45.020 clat percentiles (usec): 00:19:45.020 | 1.00th=[ 1680], 5.00th=[ 1811], 10.00th=[ 1860], 20.00th=[ 1876], 00:19:45.020 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:19:45.020 | 70.00th=[ 1991], 80.00th=[ 2073], 90.00th=[ 2442], 95.00th=[ 3326], 00:19:45.020 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 7898], 99.95th=[11994], 00:19:45.020 | 99.99th=[12911] 00:19:45.020 bw ( KiB/s): min=41216, max=128608, per=100.00%, avg=116059.69, stdev=18318.21, samples=59 00:19:45.020 iops : min=10304, max=32152, avg=29014.92, stdev=4579.55, samples=59 00:19:45.020 write: IOPS=14.5k, BW=56.6MiB/s (59.3MB/s)(3394MiB/60002msec); 0 zone resets 00:19:45.020 slat (nsec): min=1070, max=285814, avg=5038.03, stdev=1764.02 00:19:45.020 clat (usec): min=533, max=30401k, avg=4064.04, stdev=237412.22 00:19:45.020 lat (usec): min=537, max=30401k, avg=4069.08, stdev=237412.22 00:19:45.020 clat percentiles (usec): 00:19:45.020 | 1.00th=[ 1713], 5.00th=[ 1893], 10.00th=[ 1942], 20.00th=[ 1975], 00:19:45.020 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2057], 00:19:45.020 | 70.00th=[ 2073], 80.00th=[ 2147], 90.00th=[ 2507], 95.00th=[ 3261], 00:19:45.020 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 7963], 99.95th=[ 9110], 00:19:45.020 | 99.99th=[13042] 00:19:45.020 bw ( KiB/s): min=40712, max=126952, per=100.00%, avg=115897.54, stdev=18461.79, samples=59 00:19:45.020 iops : min=10178, max=31738, avg=28974.37, stdev=4615.44, samples=59 00:19:45.020 lat (usec) : 750=0.01%, 1000=0.01% 00:19:45.020 lat (msec) : 2=53.60%, 4=42.86%, 10=3.49%, 20=0.04%, >=2000=0.01% 00:19:45.020 cpu : usr=3.38%, sys=14.98%, ctx=60068, majf=0, minf=16 00:19:45.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:45.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:19:45.020 issued rwts: total=870097,868834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:45.020 00:19:45.020 Run status group 0 (all jobs): 00:19:45.020 READ: bw=56.6MiB/s (59.4MB/s), 56.6MiB/s-56.6MiB/s (59.4MB/s-59.4MB/s), io=3399MiB (3564MB), run=60002-60002msec 00:19:45.020 WRITE: bw=56.6MiB/s (59.3MB/s), 56.6MiB/s-56.6MiB/s (59.3MB/s-59.3MB/s), io=3394MiB (3559MB), run=60002-60002msec 00:19:45.020 00:19:45.020 Disk stats (read/write): 00:19:45.020 ublkb1: ios=866707/865423, merge=0/0, ticks=4083568/3405679, in_queue=7489247, util=99.91% 00:19:45.020 16:07:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.020 [2024-11-20 16:07:40.683319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:45.020 [2024-11-20 16:07:40.722758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:45.020 [2024-11-20 16:07:40.722918] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:45.020 [2024-11-20 16:07:40.730743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:45.020 [2024-11-20 16:07:40.730835] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:45.020 [2024-11-20 16:07:40.730841] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.020 16:07:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.020 [2024-11-20 16:07:40.746827] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:45.020 [2024-11-20 16:07:40.750515] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:45.020 [2024-11-20 16:07:40.750548] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.020 16:07:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:45.020 16:07:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:45.020 16:07:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74203 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74203 ']' 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74203 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74203 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.020 killing process with pid 74203 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74203' 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74203 00:19:45.020 16:07:40 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 74203 00:19:45.020 [2024-11-20 16:07:41.827743] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:45.020 [2024-11-20 16:07:41.827793] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:45.020 00:19:45.020 real 1m4.311s 00:19:45.020 user 1m45.776s 00:19:45.020 sys 0m23.262s 00:19:45.020 16:07:42 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.020 ************************************ 00:19:45.020 16:07:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.020 END TEST ublk_recovery 00:19:45.020 ************************************ 00:19:45.020 16:07:42 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:45.020 16:07:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:45.020 16:07:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.020 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:19:45.020 16:07:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:45.020 16:07:42 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:45.020 16:07:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:45.020 16:07:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.020 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:19:45.020 ************************************ 00:19:45.020 START TEST ftl 00:19:45.020 ************************************ 00:19:45.020 16:07:42 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:45.020 * Looking for test storage... 
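Before the FTL suite starts: the ublk teardown just traced is, in sketch form (killprocess() in autotest_common.sh does the same with extra uname/retry checks not shown):

    rpc.py ublk_stop_disk 1       # UBLK_CMD_STOP_DEV + UBLK_CMD_DEL_DEV, as logged
    rpc.py ublk_destroy_target    # _ublk_fini: finish shutdown
    kill "$spdk_pid"              # killprocess 74203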
00:19:45.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.021 16:07:42 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.021 16:07:42 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.021 16:07:42 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.021 16:07:42 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.021 16:07:42 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.021 16:07:42 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:45.021 16:07:42 ftl -- scripts/common.sh@345 -- # : 1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.021 16:07:42 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.021 16:07:42 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@353 -- # local d=1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.021 16:07:42 ftl -- scripts/common.sh@355 -- # echo 1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.021 16:07:42 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@353 -- # local d=2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.021 16:07:42 ftl -- scripts/common.sh@355 -- # echo 2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.021 16:07:42 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.021 16:07:42 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.021 16:07:42 ftl -- scripts/common.sh@368 -- # return 0 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:45.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.021 --rc genhtml_branch_coverage=1 00:19:45.021 --rc genhtml_function_coverage=1 00:19:45.021 --rc genhtml_legend=1 00:19:45.021 --rc geninfo_all_blocks=1 00:19:45.021 --rc geninfo_unexecuted_blocks=1 00:19:45.021 00:19:45.021 ' 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:45.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.021 --rc genhtml_branch_coverage=1 00:19:45.021 --rc genhtml_function_coverage=1 00:19:45.021 --rc genhtml_legend=1 00:19:45.021 --rc geninfo_all_blocks=1 00:19:45.021 --rc geninfo_unexecuted_blocks=1 00:19:45.021 00:19:45.021 ' 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:45.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.021 --rc genhtml_branch_coverage=1 00:19:45.021 --rc genhtml_function_coverage=1 00:19:45.021 --rc 
genhtml_legend=1 00:19:45.021 --rc geninfo_all_blocks=1 00:19:45.021 --rc geninfo_unexecuted_blocks=1 00:19:45.021 00:19:45.021 ' 00:19:45.021 16:07:42 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:45.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.021 --rc genhtml_branch_coverage=1 00:19:45.021 --rc genhtml_function_coverage=1 00:19:45.021 --rc genhtml_legend=1 00:19:45.021 --rc geninfo_all_blocks=1 00:19:45.021 --rc geninfo_unexecuted_blocks=1 00:19:45.021 00:19:45.021 ' 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:45.021 16:07:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:45.021 16:07:42 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:45.021 16:07:42 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:45.021 16:07:42 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:45.021 16:07:42 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:45.021 16:07:42 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.021 16:07:42 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:45.021 16:07:42 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:45.021 16:07:42 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:45.021 16:07:42 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:45.021 16:07:42 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:45.021 16:07:42 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:45.021 16:07:42 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:45.021 16:07:42 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:45.021 16:07:42 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:45.021 16:07:42 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:45.021 16:07:42 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:45.021 16:07:42 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:45.021 16:07:42 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:45.021 16:07:42 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:45.021 16:07:42 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:45.021 16:07:42 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:45.021 16:07:42 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:45.021 16:07:42 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:45.021 16:07:42 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:45.021 16:07:42 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:45.021 16:07:42 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:45.021 16:07:42 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:45.021 16:07:42 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:45.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.021 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.021 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.021 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.021 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.021 16:07:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75004 00:19:45.021 16:07:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75004 00:19:45.021 16:07:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:45.021 16:07:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 75004 ']' 00:19:45.021 16:07:43 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.021 16:07:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.021 16:07:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.021 16:07:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.021 16:07:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:45.021 [2024-11-20 16:07:43.261326] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:19:45.021 [2024-11-20 16:07:43.261446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75004 ] 00:19:45.282 [2024-11-20 16:07:43.419495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.282 [2024-11-20 16:07:43.522071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.851 16:07:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.851 16:07:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:45.851 16:07:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:46.109 16:07:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:47.050 16:07:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:47.050 16:07:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:47.310 16:07:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:47.310 16:07:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:47.310 16:07:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@50 -- # break 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:47.571 16:07:45 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:47.571 16:07:45 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:47.832 16:07:45 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:47.832 16:07:45 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:47.832 16:07:45 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:47.832 16:07:45 ftl -- ftl/ftl.sh@63 -- # break 00:19:47.832 16:07:45 ftl -- ftl/ftl.sh@66 -- # killprocess 75004 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@954 -- # '[' -z 75004 ']' 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@958 -- # kill -0 75004 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@959 -- # uname 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75004 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.832 killing process with pid 75004 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75004' 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@973 -- # kill 75004 00:19:47.832 16:07:45 ftl -- common/autotest_common.sh@978 -- # wait 75004 00:19:49.318 16:07:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:49.318 16:07:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:49.318 16:07:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:49.318 16:07:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.318 16:07:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 ************************************ 00:19:49.318 START TEST ftl_fio_basic 00:19:49.318 ************************************ 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:49.318 * Looking for test storage... 
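The cache/base device selection in ftl.sh above is a pair of jq filters over bdev_get_bdevs; reproduced here in runnable form (filters copied verbatim from the trace):

    # Cache candidates: non-zoned NVMe namespaces with 64-byte metadata and >= 1310720 blocks.
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # Base candidates: any other non-zoned namespace of sufficient size, excluding the chosen cache (0000:00:10.0 in this run).
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

Each for-loop takes the first match and breaks, which is why 0000:00:10.0 ends up as the NV cache and 0000:00:11.0 as the base device in this run.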
00:19:49.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.318 --rc genhtml_branch_coverage=1 00:19:49.318 --rc genhtml_function_coverage=1 00:19:49.318 --rc genhtml_legend=1 00:19:49.318 --rc geninfo_all_blocks=1 00:19:49.318 --rc geninfo_unexecuted_blocks=1 00:19:49.318 00:19:49.318 ' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.318 --rc 
genhtml_branch_coverage=1 00:19:49.318 --rc genhtml_function_coverage=1 00:19:49.318 --rc genhtml_legend=1 00:19:49.318 --rc geninfo_all_blocks=1 00:19:49.318 --rc geninfo_unexecuted_blocks=1 00:19:49.318 00:19:49.318 ' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.318 --rc genhtml_branch_coverage=1 00:19:49.318 --rc genhtml_function_coverage=1 00:19:49.318 --rc genhtml_legend=1 00:19:49.318 --rc geninfo_all_blocks=1 00:19:49.318 --rc geninfo_unexecuted_blocks=1 00:19:49.318 00:19:49.318 ' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.318 --rc genhtml_branch_coverage=1 00:19:49.318 --rc genhtml_function_coverage=1 00:19:49.318 --rc genhtml_legend=1 00:19:49.318 --rc geninfo_all_blocks=1 00:19:49.318 --rc geninfo_unexecuted_blocks=1 00:19:49.318 00:19:49.318 ' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:49.318 
16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:49.318 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75137 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75137 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75137 ']' 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
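The create_base_bdev and get_bdev_size steps traced below reduce to the following condensed sketch (the helper derives the size from the same bdev_get_bdevs JSON; $lvs_uuid stands in for the UUID returned by create_lvstore):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    bs=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                               # 4096 B * 1310720 = 5120 MiB
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"

The lvol is created thin-provisioned (-t) at 103424 MiB on a 5120 MiB namespace, which is why its num_blocks (26476544) can exceed the physical device in the JSON dumps that follow.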
00:19:49.319 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:49.319 16:07:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:49.319 [2024-11-20 16:07:47.479127] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:19:49.319 [2024-11-20 16:07:47.479458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75137 ] 00:19:49.579 [2024-11-20 16:07:47.641833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:49.579 [2024-11-20 16:07:47.746192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.579 [2024-11-20 16:07:47.746472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.579 [2024-11-20 16:07:47.746539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:50.520 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:50.782 { 00:19:50.782 "name": "nvme0n1", 00:19:50.782 "aliases": [ 00:19:50.782 "b9e1b507-76bb-4500-930f-254fb1c180bd" 00:19:50.782 ], 00:19:50.782 "product_name": "NVMe disk", 00:19:50.782 "block_size": 4096, 00:19:50.782 "num_blocks": 1310720, 00:19:50.782 "uuid": "b9e1b507-76bb-4500-930f-254fb1c180bd", 00:19:50.782 "numa_id": -1, 00:19:50.782 "assigned_rate_limits": { 00:19:50.782 "rw_ios_per_sec": 0, 00:19:50.782 "rw_mbytes_per_sec": 0, 00:19:50.782 "r_mbytes_per_sec": 0, 00:19:50.782 "w_mbytes_per_sec": 0 00:19:50.782 }, 00:19:50.782 "claimed": false, 00:19:50.782 "zoned": false, 00:19:50.782 "supported_io_types": { 00:19:50.782 "read": true, 00:19:50.782 "write": true, 00:19:50.782 "unmap": true, 00:19:50.782 "flush": true, 
00:19:50.782 "reset": true, 00:19:50.782 "nvme_admin": true, 00:19:50.782 "nvme_io": true, 00:19:50.782 "nvme_io_md": false, 00:19:50.782 "write_zeroes": true, 00:19:50.782 "zcopy": false, 00:19:50.782 "get_zone_info": false, 00:19:50.782 "zone_management": false, 00:19:50.782 "zone_append": false, 00:19:50.782 "compare": true, 00:19:50.782 "compare_and_write": false, 00:19:50.782 "abort": true, 00:19:50.782 "seek_hole": false, 00:19:50.782 "seek_data": false, 00:19:50.782 "copy": true, 00:19:50.782 "nvme_iov_md": false 00:19:50.782 }, 00:19:50.782 "driver_specific": { 00:19:50.782 "nvme": [ 00:19:50.782 { 00:19:50.782 "pci_address": "0000:00:11.0", 00:19:50.782 "trid": { 00:19:50.782 "trtype": "PCIe", 00:19:50.782 "traddr": "0000:00:11.0" 00:19:50.782 }, 00:19:50.782 "ctrlr_data": { 00:19:50.782 "cntlid": 0, 00:19:50.782 "vendor_id": "0x1b36", 00:19:50.782 "model_number": "QEMU NVMe Ctrl", 00:19:50.782 "serial_number": "12341", 00:19:50.782 "firmware_revision": "8.0.0", 00:19:50.782 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:50.782 "oacs": { 00:19:50.782 "security": 0, 00:19:50.782 "format": 1, 00:19:50.782 "firmware": 0, 00:19:50.782 "ns_manage": 1 00:19:50.782 }, 00:19:50.782 "multi_ctrlr": false, 00:19:50.782 "ana_reporting": false 00:19:50.782 }, 00:19:50.782 "vs": { 00:19:50.782 "nvme_version": "1.4" 00:19:50.782 }, 00:19:50.782 "ns_data": { 00:19:50.782 "id": 1, 00:19:50.782 "can_share": false 00:19:50.782 } 00:19:50.782 } 00:19:50.782 ], 00:19:50.782 "mp_policy": "active_passive" 00:19:50.782 } 00:19:50.782 } 00:19:50.782 ]' 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:50.782 16:07:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:51.044 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:51.044 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:51.313 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=6c7a3521-384e-47d1-89e7-275e5384880b 00:19:51.313 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6c7a3521-384e-47d1-89e7-275e5384880b 00:19:51.572 16:07:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9d36c186-f474-4a59-a336-6584e2bcd595 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:51.573 16:07:49 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9d36c186-f474-4a59-a336-6584e2bcd595 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9d36c186-f474-4a59-a336-6584e2bcd595 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:51.573 { 00:19:51.573 "name": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:51.573 "aliases": [ 00:19:51.573 "lvs/nvme0n1p0" 00:19:51.573 ], 00:19:51.573 "product_name": "Logical Volume", 00:19:51.573 "block_size": 4096, 00:19:51.573 "num_blocks": 26476544, 00:19:51.573 "uuid": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:51.573 "assigned_rate_limits": { 00:19:51.573 "rw_ios_per_sec": 0, 00:19:51.573 "rw_mbytes_per_sec": 0, 00:19:51.573 "r_mbytes_per_sec": 0, 00:19:51.573 "w_mbytes_per_sec": 0 00:19:51.573 }, 00:19:51.573 "claimed": false, 00:19:51.573 "zoned": false, 00:19:51.573 "supported_io_types": { 00:19:51.573 "read": true, 00:19:51.573 "write": true, 00:19:51.573 "unmap": true, 00:19:51.573 "flush": false, 00:19:51.573 "reset": true, 00:19:51.573 "nvme_admin": false, 00:19:51.573 "nvme_io": false, 00:19:51.573 "nvme_io_md": false, 00:19:51.573 "write_zeroes": true, 00:19:51.573 "zcopy": false, 00:19:51.573 "get_zone_info": false, 00:19:51.573 "zone_management": false, 00:19:51.573 "zone_append": false, 00:19:51.573 "compare": false, 00:19:51.573 "compare_and_write": false, 00:19:51.573 "abort": false, 00:19:51.573 "seek_hole": true, 00:19:51.573 "seek_data": true, 00:19:51.573 "copy": false, 00:19:51.573 "nvme_iov_md": false 00:19:51.573 }, 00:19:51.573 "driver_specific": { 00:19:51.573 "lvol": { 00:19:51.573 "lvol_store_uuid": "6c7a3521-384e-47d1-89e7-275e5384880b", 00:19:51.573 "base_bdev": "nvme0n1", 00:19:51.573 "thin_provision": true, 00:19:51.573 "num_allocated_clusters": 0, 00:19:51.573 "snapshot": false, 00:19:51.573 "clone": false, 00:19:51.573 "esnap_clone": false 00:19:51.573 } 00:19:51.573 } 00:19:51.573 } 00:19:51.573 ]' 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:51.573 16:07:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
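create_nv_cache_bdev then sizes the cache controller the same way and carves a single split for FTL's write buffer; together with the final bdev_ftl_create this is, in sketch form ($base_lvol standing in for the lvol UUID 9d36c186-f474-4a59-a336-6584e2bcd595):

    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1     # one 5171 MiB split -> nvc0n1p0
    # Base lvol sizing, as logged: 26476544 blocks * 4096 B = 103424 MiB.
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$base_lvol" -c nvc0n1p0 --l2p_dram_limit 60

Note the error traced shortly below -- fio.sh: line 52: [: -eq: unary operator expected -- is the usual empty-unquoted-operand failure: a test like [ $flag -eq 1 ] (variable name hypothetical) expands to [ -eq 1 ] when $flag is unset, so [ prints the error and returns non-zero, and the script simply falls through to the default l2p_dram_size_mb=60 path. A defensive form is [ "${flag:-0}" -eq 1 ].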
00:19:52.141 16:07:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9d36c186-f474-4a59-a336-6584e2bcd595 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:52.141 { 00:19:52.141 "name": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:52.141 "aliases": [ 00:19:52.141 "lvs/nvme0n1p0" 00:19:52.141 ], 00:19:52.141 "product_name": "Logical Volume", 00:19:52.141 "block_size": 4096, 00:19:52.141 "num_blocks": 26476544, 00:19:52.141 "uuid": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:52.141 "assigned_rate_limits": { 00:19:52.141 "rw_ios_per_sec": 0, 00:19:52.141 "rw_mbytes_per_sec": 0, 00:19:52.141 "r_mbytes_per_sec": 0, 00:19:52.141 "w_mbytes_per_sec": 0 00:19:52.141 }, 00:19:52.141 "claimed": false, 00:19:52.141 "zoned": false, 00:19:52.141 "supported_io_types": { 00:19:52.141 "read": true, 00:19:52.141 "write": true, 00:19:52.141 "unmap": true, 00:19:52.141 "flush": false, 00:19:52.141 "reset": true, 00:19:52.141 "nvme_admin": false, 00:19:52.141 "nvme_io": false, 00:19:52.141 "nvme_io_md": false, 00:19:52.141 "write_zeroes": true, 00:19:52.141 "zcopy": false, 00:19:52.141 "get_zone_info": false, 00:19:52.141 "zone_management": false, 00:19:52.141 "zone_append": false, 00:19:52.141 "compare": false, 00:19:52.141 "compare_and_write": false, 00:19:52.141 "abort": false, 00:19:52.141 "seek_hole": true, 00:19:52.141 "seek_data": true, 00:19:52.141 "copy": false, 00:19:52.141 "nvme_iov_md": false 00:19:52.141 }, 00:19:52.141 "driver_specific": { 00:19:52.141 "lvol": { 00:19:52.141 "lvol_store_uuid": "6c7a3521-384e-47d1-89e7-275e5384880b", 00:19:52.141 "base_bdev": "nvme0n1", 00:19:52.141 "thin_provision": true, 00:19:52.141 "num_allocated_clusters": 0, 00:19:52.141 "snapshot": false, 00:19:52.141 "clone": false, 00:19:52.141 "esnap_clone": false 00:19:52.141 } 00:19:52.141 } 00:19:52.141 } 00:19:52.141 ]' 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:52.141 16:07:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:52.399 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9d36c186-f474-4a59-a336-6584e2bcd595 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:52.399 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d36c186-f474-4a59-a336-6584e2bcd595 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:52.657 { 00:19:52.657 "name": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:52.657 "aliases": [ 00:19:52.657 "lvs/nvme0n1p0" 00:19:52.657 ], 00:19:52.657 "product_name": "Logical Volume", 00:19:52.657 "block_size": 4096, 00:19:52.657 "num_blocks": 26476544, 00:19:52.657 "uuid": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:52.657 "assigned_rate_limits": { 00:19:52.657 "rw_ios_per_sec": 0, 00:19:52.657 "rw_mbytes_per_sec": 0, 00:19:52.657 "r_mbytes_per_sec": 0, 00:19:52.657 "w_mbytes_per_sec": 0 00:19:52.657 }, 00:19:52.657 "claimed": false, 00:19:52.657 "zoned": false, 00:19:52.657 "supported_io_types": { 00:19:52.657 "read": true, 00:19:52.657 "write": true, 00:19:52.657 "unmap": true, 00:19:52.657 "flush": false, 00:19:52.657 "reset": true, 00:19:52.657 "nvme_admin": false, 00:19:52.657 "nvme_io": false, 00:19:52.657 "nvme_io_md": false, 00:19:52.657 "write_zeroes": true, 00:19:52.657 "zcopy": false, 00:19:52.657 "get_zone_info": false, 00:19:52.657 "zone_management": false, 00:19:52.657 "zone_append": false, 00:19:52.657 "compare": false, 00:19:52.657 "compare_and_write": false, 00:19:52.657 "abort": false, 00:19:52.657 "seek_hole": true, 00:19:52.657 "seek_data": true, 00:19:52.657 "copy": false, 00:19:52.657 "nvme_iov_md": false 00:19:52.657 }, 00:19:52.657 "driver_specific": { 00:19:52.657 "lvol": { 00:19:52.657 "lvol_store_uuid": "6c7a3521-384e-47d1-89e7-275e5384880b", 00:19:52.657 "base_bdev": "nvme0n1", 00:19:52.657 "thin_provision": true, 00:19:52.657 "num_allocated_clusters": 0, 00:19:52.657 "snapshot": false, 00:19:52.657 "clone": false, 00:19:52.657 "esnap_clone": false 00:19:52.657 } 00:19:52.657 } 00:19:52.657 } 00:19:52.657 ]' 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:52.657 16:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
9d36c186-f474-4a59-a336-6584e2bcd595 -c nvc0n1p0 --l2p_dram_limit 60 00:19:52.915 [2024-11-20 16:07:51.038317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.038365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.915 [2024-11-20 16:07:51.038379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.915 [2024-11-20 16:07:51.038386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.915 [2024-11-20 16:07:51.038446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.038456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.915 [2024-11-20 16:07:51.038465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:52.915 [2024-11-20 16:07:51.038471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.915 [2024-11-20 16:07:51.038491] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.915 [2024-11-20 16:07:51.039194] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.915 [2024-11-20 16:07:51.039222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.039229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.915 [2024-11-20 16:07:51.039238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:19:52.915 [2024-11-20 16:07:51.039244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.915 [2024-11-20 16:07:51.039390] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 99ac4dbc-ee1e-4f83-8e4f-cb9a50c90e54 00:19:52.915 [2024-11-20 16:07:51.040461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.040495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:52.915 [2024-11-20 16:07:51.040504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:52.915 [2024-11-20 16:07:51.040511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.915 [2024-11-20 16:07:51.045392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.045423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.915 [2024-11-20 16:07:51.045431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.833 ms 00:19:52.915 [2024-11-20 16:07:51.045440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.915 [2024-11-20 16:07:51.045536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.045552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.915 [2024-11-20 16:07:51.045560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:52.915 [2024-11-20 16:07:51.045571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.915 [2024-11-20 16:07:51.045613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.915 [2024-11-20 16:07:51.045627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.915 [2024-11-20 16:07:51.045638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:19:52.916 [2024-11-20 16:07:51.045651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.916 [2024-11-20 16:07:51.045674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.916 [2024-11-20 16:07:51.048698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.916 [2024-11-20 16:07:51.048734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.916 [2024-11-20 16:07:51.048745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:19:52.916 [2024-11-20 16:07:51.048753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.916 [2024-11-20 16:07:51.048785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.916 [2024-11-20 16:07:51.048792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.916 [2024-11-20 16:07:51.048800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:52.916 [2024-11-20 16:07:51.048805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.916 [2024-11-20 16:07:51.048841] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:52.916 [2024-11-20 16:07:51.048972] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.916 [2024-11-20 16:07:51.048992] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.916 [2024-11-20 16:07:51.049006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.916 [2024-11-20 16:07:51.049018] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:52.916 [2024-11-20 16:07:51.049052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.916 [2024-11-20 16:07:51.049059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.916 [2024-11-20 16:07:51.049066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.916 [2024-11-20 16:07:51.049075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.916 [2024-11-20 16:07:51.049086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.916 [2024-11-20 16:07:51.049096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:19:52.916 [2024-11-20 16:07:51.049102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.916 [2024-11-20 16:07:51.049181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.916 [2024-11-20 16:07:51.049188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.916 [2024-11-20 16:07:51.049199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:52.916 [2024-11-20 16:07:51.049208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.916 [2024-11-20 16:07:51.049311] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:19:52.916 [2024-11-20 16:07:51.049324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.916 [2024-11-20 16:07:51.049334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.916 [2024-11-20 16:07:51.049353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.916 [2024-11-20 16:07:51.049377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.916 [2024-11-20 16:07:51.049396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.916 [2024-11-20 16:07:51.049404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:52.916 [2024-11-20 16:07:51.049410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.916 [2024-11-20 16:07:51.049418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.916 [2024-11-20 16:07:51.049426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:52.916 [2024-11-20 16:07:51.049433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.916 [2024-11-20 16:07:51.049451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.916 [2024-11-20 16:07:51.049480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.916 [2024-11-20 16:07:51.049504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.916 [2024-11-20 16:07:51.049522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.916 [2024-11-20 16:07:51.049547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.916 [2024-11-20 16:07:51.049572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.916 [2024-11-20 16:07:51.049584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.916 [2024-11-20 16:07:51.049606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:52.916 [2024-11-20 16:07:51.049618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.916 [2024-11-20 16:07:51.049624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.916 [2024-11-20 16:07:51.049631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:52.916 [2024-11-20 16:07:51.049636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.916 [2024-11-20 16:07:51.049648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:52.916 [2024-11-20 16:07:51.049656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.916 [2024-11-20 16:07:51.049671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.916 [2024-11-20 16:07:51.049680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.916 [2024-11-20 16:07:51.049693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.916 [2024-11-20 16:07:51.049701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.916 [2024-11-20 16:07:51.049707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.916 [2024-11-20 16:07:51.049714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.916 [2024-11-20 16:07:51.049720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.916 [2024-11-20 16:07:51.049743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.916 [2024-11-20 16:07:51.049756] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.917 [2024-11-20 16:07:51.049767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:52.917 [2024-11-20 16:07:51.049781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:52.917 [2024-11-20 16:07:51.049787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:52.917 [2024-11-20 16:07:51.049795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:52.917 [2024-11-20 16:07:51.049801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:52.917 [2024-11-20 16:07:51.049807] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:52.917 [2024-11-20 16:07:51.049813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:52.917 [2024-11-20 16:07:51.049821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:52.917 [2024-11-20 16:07:51.049830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:52.917 [2024-11-20 16:07:51.049839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:52.917 [2024-11-20 16:07:51.049884] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.917 [2024-11-20 16:07:51.049894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.917 [2024-11-20 16:07:51.049918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.917 [2024-11-20 16:07:51.049925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.917 [2024-11-20 16:07:51.049932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.917 [2024-11-20 16:07:51.049939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.917 [2024-11-20 16:07:51.049947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.917 [2024-11-20 16:07:51.049959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:19:52.917 [2024-11-20 16:07:51.049967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.917 [2024-11-20 16:07:51.050049] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
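Two details in the trace above are worth spelling out. The layout numbers are mutually consistent: ftl0 exposes 20971520 user blocks of 4 KiB, and at the dumped L2P address size of 4 bytes the full mapping table is 20971520 * 4 B = 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" entry; the --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that table stays resident, which is why a later notice reads "l2p maximum resident size is: 59 (of 60) MiB". Separately, the "[: -eq: unary operator expected" error from fio.sh line 52 earlier in this run is the classic test(1) failure when an unquoted, empty variable is expanded inside '[ ... -eq 1 ]'; the broken test evaluates false and the script simply falls through. A minimal sketch of the failure and a defensive rewrite follows; the variable name is invented, since the log does not show which one fio.sh line 52 actually tests:

    # Hypothetical reproduction (use_l2p_cache is a made-up name, not from fio.sh):
    use_l2p_cache=
    [ $use_l2p_cache -eq 1 ] && echo on   # expands to '[ -eq 1 ]': "unary operator expected"

    # Defensive form: quote the expansion and supply a default so the test stays binary.
    [ "${use_l2p_cache:-0}" -eq 1 ] && echo on

The roughly 2.9 s jump in the timestamps that follows is the scrub itself: all 5 NV-cache chunks are wiped before first use, and the "Scrub NV cache" step below accounts for it at 2902.512 ms.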
00:19:52.917 [2024-11-20 16:07:51.050071] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:56.201 [2024-11-20 16:07:53.952574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:53.952638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:56.201 [2024-11-20 16:07:53.952655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2902.512 ms 00:19:56.201 [2024-11-20 16:07:53.952665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:53.978076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:53.978129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.201 [2024-11-20 16:07:53.978142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.195 ms 00:19:56.201 [2024-11-20 16:07:53.978152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:53.978293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:53.978306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.201 [2024-11-20 16:07:53.978314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:56.201 [2024-11-20 16:07:53.978325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.027587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.027639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.201 [2024-11-20 16:07:54.027656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.203 ms 00:19:56.201 [2024-11-20 16:07:54.027667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.027717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.027739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.201 [2024-11-20 16:07:54.027748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:56.201 [2024-11-20 16:07:54.027757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.028150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.028178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.201 [2024-11-20 16:07:54.028187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:19:56.201 [2024-11-20 16:07:54.028198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.028332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.028349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.201 [2024-11-20 16:07:54.028357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:56.201 [2024-11-20 16:07:54.028368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.042690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.042737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.201 [2024-11-20 
16:07:54.042747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.299 ms 00:19:56.201 [2024-11-20 16:07:54.042757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.054089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:56.201 [2024-11-20 16:07:54.067986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.068037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.201 [2024-11-20 16:07:54.068049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.134 ms 00:19:56.201 [2024-11-20 16:07:54.068059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.118189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.118236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:56.201 [2024-11-20 16:07:54.118253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.092 ms 00:19:56.201 [2024-11-20 16:07:54.118262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.118444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.118454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.201 [2024-11-20 16:07:54.118467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:56.201 [2024-11-20 16:07:54.118474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.201 [2024-11-20 16:07:54.141398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.201 [2024-11-20 16:07:54.141439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:56.202 [2024-11-20 16:07:54.141452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.869 ms 00:19:56.202 [2024-11-20 16:07:54.141461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.163783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.163818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:56.202 [2024-11-20 16:07:54.163831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.279 ms 00:19:56.202 [2024-11-20 16:07:54.163839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.164412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.164433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.202 [2024-11-20 16:07:54.164444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:19:56.202 [2024-11-20 16:07:54.164451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.229335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.229379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:56.202 [2024-11-20 16:07:54.229395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.844 ms 00:19:56.202 [2024-11-20 16:07:54.229407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 
16:07:54.253314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.253353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:56.202 [2024-11-20 16:07:54.253366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.824 ms 00:19:56.202 [2024-11-20 16:07:54.253375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.275845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.275882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:56.202 [2024-11-20 16:07:54.275894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.424 ms 00:19:56.202 [2024-11-20 16:07:54.275903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.299235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.299271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.202 [2024-11-20 16:07:54.299284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.288 ms 00:19:56.202 [2024-11-20 16:07:54.299292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.299342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.299351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.202 [2024-11-20 16:07:54.299367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:56.202 [2024-11-20 16:07:54.299374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.299456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.202 [2024-11-20 16:07:54.299465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:56.202 [2024-11-20 16:07:54.299475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:56.202 [2024-11-20 16:07:54.299483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.202 [2024-11-20 16:07:54.300518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3261.765 ms, result 0 00:19:56.202 { 00:19:56.202 "name": "ftl0", 00:19:56.202 "uuid": "99ac4dbc-ee1e-4f83-8e4f-cb9a50c90e54" 00:19:56.202 } 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:56.202 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:56.460 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:56.718 [ 00:19:56.718 { 00:19:56.718 "name": "ftl0", 00:19:56.718 "aliases": [ 00:19:56.718 "99ac4dbc-ee1e-4f83-8e4f-cb9a50c90e54" 00:19:56.718 ], 00:19:56.718 "product_name": "FTL 
disk", 00:19:56.718 "block_size": 4096, 00:19:56.718 "num_blocks": 20971520, 00:19:56.718 "uuid": "99ac4dbc-ee1e-4f83-8e4f-cb9a50c90e54", 00:19:56.718 "assigned_rate_limits": { 00:19:56.718 "rw_ios_per_sec": 0, 00:19:56.718 "rw_mbytes_per_sec": 0, 00:19:56.718 "r_mbytes_per_sec": 0, 00:19:56.718 "w_mbytes_per_sec": 0 00:19:56.718 }, 00:19:56.718 "claimed": false, 00:19:56.718 "zoned": false, 00:19:56.718 "supported_io_types": { 00:19:56.718 "read": true, 00:19:56.718 "write": true, 00:19:56.718 "unmap": true, 00:19:56.718 "flush": true, 00:19:56.718 "reset": false, 00:19:56.718 "nvme_admin": false, 00:19:56.718 "nvme_io": false, 00:19:56.718 "nvme_io_md": false, 00:19:56.718 "write_zeroes": true, 00:19:56.718 "zcopy": false, 00:19:56.718 "get_zone_info": false, 00:19:56.718 "zone_management": false, 00:19:56.718 "zone_append": false, 00:19:56.718 "compare": false, 00:19:56.718 "compare_and_write": false, 00:19:56.718 "abort": false, 00:19:56.718 "seek_hole": false, 00:19:56.718 "seek_data": false, 00:19:56.718 "copy": false, 00:19:56.718 "nvme_iov_md": false 00:19:56.718 }, 00:19:56.718 "driver_specific": { 00:19:56.718 "ftl": { 00:19:56.718 "base_bdev": "9d36c186-f474-4a59-a336-6584e2bcd595", 00:19:56.718 "cache": "nvc0n1p0" 00:19:56.718 } 00:19:56.718 } 00:19:56.718 } 00:19:56.718 ] 00:19:56.718 16:07:54 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:56.718 16:07:54 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:56.718 16:07:54 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:56.718 16:07:54 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:56.718 16:07:54 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:56.977 [2024-11-20 16:07:55.117330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.117377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:56.977 [2024-11-20 16:07:55.117390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:56.977 [2024-11-20 16:07:55.117400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.117433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.977 [2024-11-20 16:07:55.120029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.120076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.977 [2024-11-20 16:07:55.120090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:19:56.977 [2024-11-20 16:07:55.120098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.120498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.120517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.977 [2024-11-20 16:07:55.120528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:56.977 [2024-11-20 16:07:55.120535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.123778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.123800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.977 
[2024-11-20 16:07:55.123811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:19:56.977 [2024-11-20 16:07:55.123820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.130020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.130047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.977 [2024-11-20 16:07:55.130057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:19:56.977 [2024-11-20 16:07:55.130066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.153296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.153333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.977 [2024-11-20 16:07:55.153348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.150 ms 00:19:56.977 [2024-11-20 16:07:55.153356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.167773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.167809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.977 [2024-11-20 16:07:55.167826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.358 ms 00:19:56.977 [2024-11-20 16:07:55.167833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.168030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.168060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.977 [2024-11-20 16:07:55.168071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:56.977 [2024-11-20 16:07:55.168078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.191153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.191188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.977 [2024-11-20 16:07:55.191201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.044 ms 00:19:56.977 [2024-11-20 16:07:55.191209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.977 [2024-11-20 16:07:55.213627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.977 [2024-11-20 16:07:55.213662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.977 [2024-11-20 16:07:55.213676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.373 ms 00:19:56.977 [2024-11-20 16:07:55.213683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.237 [2024-11-20 16:07:55.235694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.237 [2024-11-20 16:07:55.235742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.237 [2024-11-20 16:07:55.235755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.966 ms 00:19:57.237 [2024-11-20 16:07:55.235762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.237 [2024-11-20 16:07:55.258377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.237 [2024-11-20 16:07:55.258409] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.237 [2024-11-20 16:07:55.258422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.527 ms 00:19:57.237 [2024-11-20 16:07:55.258429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.237 [2024-11-20 16:07:55.258470] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.237 [2024-11-20 16:07:55.258484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 
[2024-11-20 16:07:55.258671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.237 [2024-11-20 16:07:55.258908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.258999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.237 [2024-11-20 16:07:55.259058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.238 [2024-11-20 16:07:55.259372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.238 [2024-11-20 16:07:55.259381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99ac4dbc-ee1e-4f83-8e4f-cb9a50c90e54 00:19:57.238 [2024-11-20 16:07:55.259388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.238 [2024-11-20 16:07:55.259399] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.238 [2024-11-20 16:07:55.259406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.238 [2024-11-20 16:07:55.259417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.238 [2024-11-20 16:07:55.259424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.238 [2024-11-20 16:07:55.259433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.238 [2024-11-20 16:07:55.259440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.238 [2024-11-20 16:07:55.259449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.238 [2024-11-20 16:07:55.259456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.238 [2024-11-20 16:07:55.259464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.238 [2024-11-20 16:07:55.259472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.238 [2024-11-20 16:07:55.259481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:19:57.238 [2024-11-20 16:07:55.259489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.271958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.238 [2024-11-20 16:07:55.271991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.238 [2024-11-20 16:07:55.272003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.432 ms 00:19:57.238 [2024-11-20 16:07:55.272012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.272371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.238 [2024-11-20 16:07:55.272385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.238 [2024-11-20 16:07:55.272395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:19:57.238 [2024-11-20 16:07:55.272402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.316013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.316057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.238 [2024-11-20 16:07:55.316069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.316077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
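The band dump above is the expected picture for a device that was created, never written by a user, and then unloaded: every one of the 100 bands reports 0 / 261120 valid blocks, wr_cnt 0, state free. At the 4 KiB block size each band spans 261120 * 4096 B = 1020 MiB, so the 100 bands cover 102000 MiB of the 102400 MiB data_btm region from the startup layout dump, the remainder presumably being per-band bookkeeping. The statistics read the same way: the 960 total writes are all internal metadata, user writes is 0, and WAF (write amplification factor, total writes divided by user writes) is therefore reported as inf. The band arithmetic, double-checked in shell with the values from this log:

    echo $(( 261120 * 4096 / 1024 / 1024 ))        # MiB per band         -> 1020
    echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))  # MiB across 100 bands -> 102000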
00:19:57.238 [2024-11-20 16:07:55.316141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.316149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.238 [2024-11-20 16:07:55.316159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.316166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.316263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.316275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.238 [2024-11-20 16:07:55.316284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.316291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.316319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.316327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.238 [2024-11-20 16:07:55.316336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.316343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.397540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.397737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.238 [2024-11-20 16:07:55.397759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.397767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.460484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.460526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.238 [2024-11-20 16:07:55.460538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.460546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.460623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.460632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.238 [2024-11-20 16:07:55.460644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.460652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.460751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.460762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.238 [2024-11-20 16:07:55.460772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.460779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.460881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.460891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.238 [2024-11-20 16:07:55.460901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 
16:07:55.460910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.460958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.460967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.238 [2024-11-20 16:07:55.460976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.460984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.461024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.238 [2024-11-20 16:07:55.461032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.238 [2024-11-20 16:07:55.461041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.238 [2024-11-20 16:07:55.461048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.238 [2024-11-20 16:07:55.461099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.239 [2024-11-20 16:07:55.461108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.239 [2024-11-20 16:07:55.461117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.239 [2024-11-20 16:07:55.461124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.239 [2024-11-20 16:07:55.461270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.917 ms, result 0 00:19:57.239 true 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75137 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75137 ']' 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75137 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75137 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.497 killing process with pid 75137 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75137' 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75137 00:19:57.497 16:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75137 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:04.147 16:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:04.147 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:04.147 fio-3.35 00:20:04.147 Starting 1 thread 00:20:09.431 00:20:09.431 test: (groupid=0, jobs=1): err= 0: pid=75327: Wed Nov 20 16:08:07 2024 00:20:09.431 read: IOPS=817, BW=54.3MiB/s (56.9MB/s)(255MiB/4691msec) 00:20:09.431 slat (nsec): min=3868, max=26166, avg=4972.34, stdev=2143.51 00:20:09.431 clat (usec): min=247, max=1432, avg=554.93, stdev=245.63 00:20:09.431 lat (usec): min=251, max=1437, avg=559.90, stdev=245.77 00:20:09.431 clat percentiles (usec): 00:20:09.431 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:20:09.431 | 30.00th=[ 379], 40.00th=[ 449], 50.00th=[ 510], 60.00th=[ 529], 00:20:09.431 | 70.00th=[ 594], 80.00th=[ 758], 90.00th=[ 979], 95.00th=[ 1057], 00:20:09.431 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 1401], 99.95th=[ 1434], 00:20:09.431 | 99.99th=[ 1434] 00:20:09.431 write: IOPS=822, BW=54.6MiB/s (57.3MB/s)(256MiB/4687msec); 0 zone resets 00:20:09.431 slat (nsec): min=17345, max=77163, avg=19544.92, stdev=3368.19 00:20:09.431 clat (usec): min=279, max=2334, avg=628.00, stdev=277.11 00:20:09.431 lat (usec): min=298, max=2353, avg=647.54, stdev=277.02 00:20:09.431 clat percentiles (usec): 00:20:09.431 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 355], 00:20:09.431 | 30.00th=[ 461], 40.00th=[ 498], 50.00th=[ 562], 60.00th=[ 611], 00:20:09.431 | 70.00th=[ 676], 80.00th=[ 914], 90.00th=[ 1057], 95.00th=[ 1123], 00:20:09.431 | 99.00th=[ 1319], 99.50th=[ 1811], 99.90th=[ 2212], 99.95th=[ 2311], 00:20:09.431 | 99.99th=[ 2343] 00:20:09.431 bw ( KiB/s): min=38488, max=65960, per=99.94%, avg=55911.11, stdev=9784.78, samples=9 00:20:09.431 iops : min= 566, max= 970, avg=822.22, stdev=143.89, samples=9 00:20:09.431 lat (usec) : 250=0.05%, 500=44.40%, 750=32.87%, 
1000=11.65% 00:20:09.431 lat (msec) : 2=10.92%, 4=0.10% 00:20:09.431 cpu : usr=99.30%, sys=0.09%, ctx=8, majf=0, minf=1169 00:20:09.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.431 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.431 00:20:09.431 Run status group 0 (all jobs): 00:20:09.432 READ: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=255MiB (267MB), run=4691-4691msec 00:20:09.432 WRITE: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=256MiB (269MB), run=4687-4687msec 00:20:11.345 ----------------------------------------------------- 00:20:11.345 Suppressions used: 00:20:11.345 count bytes template 00:20:11.345 1 5 /usr/src/fio/parse.c 00:20:11.345 1 8 libtcmalloc_minimal.so 00:20:11.345 1 904 libcrypto.so 00:20:11.345 ----------------------------------------------------- 00:20:11.345 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:11.345 16:08:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:11.606 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:11.606 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:11.606 fio-3.35 00:20:11.606 Starting 2 threads 00:20:43.721 00:20:43.721 first_half: (groupid=0, jobs=1): err= 0: pid=75441: Wed Nov 20 16:08:40 2024 00:20:43.721 read: IOPS=2193, BW=8775KiB/s (8985kB/s)(256MiB/29850msec) 00:20:43.721 slat (nsec): min=4019, max=30340, avg=4804.68, stdev=982.84 00:20:43.721 clat (msec): min=10, max=327, avg=48.32, stdev=38.39 00:20:43.721 lat (msec): min=10, max=327, avg=48.32, stdev=38.39 00:20:43.721 clat percentiles (msec): 00:20:43.721 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:20:43.721 | 30.00th=[ 33], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 39], 00:20:43.721 | 70.00th=[ 44], 80.00th=[ 53], 90.00th=[ 62], 95.00th=[ 120], 00:20:43.721 | 99.00th=[ 251], 99.50th=[ 279], 99.90th=[ 309], 99.95th=[ 313], 00:20:43.721 | 99.99th=[ 321] 00:20:43.721 write: IOPS=2205, BW=8822KiB/s (9034kB/s)(256MiB/29714msec); 0 zone resets 00:20:43.721 slat (usec): min=4, max=2794, avg= 6.55, stdev=19.63 00:20:43.721 clat (usec): min=449, max=48552, avg=9981.14, stdev=5177.01 00:20:43.721 lat (usec): min=460, max=48558, avg=9987.70, stdev=5177.28 00:20:43.721 clat percentiles (usec): 00:20:43.721 | 1.00th=[ 1778], 5.00th=[ 3130], 10.00th=[ 4490], 20.00th=[ 6128], 00:20:43.721 | 30.00th=[ 7308], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10290], 00:20:43.721 | 70.00th=[11469], 80.00th=[13042], 90.00th=[15926], 95.00th=[18744], 00:20:43.721 | 99.00th=[27657], 99.50th=[34866], 99.90th=[45351], 99.95th=[45876], 00:20:43.721 | 99.99th=[46924] 00:20:43.721 bw ( KiB/s): min= 8, max=40880, per=100.00%, avg=20017.23, stdev=13701.34, samples=26 00:20:43.721 iops : min= 2, max=10220, avg=5004.31, stdev=3425.33, samples=26 00:20:43.721 lat (usec) : 500=0.01%, 1000=0.02% 00:20:43.721 lat (msec) : 2=0.74%, 4=3.22%, 10=24.82%, 20=19.26%, 50=40.29% 00:20:43.721 lat (msec) : 100=8.71%, 250=2.42%, 500=0.52% 00:20:43.721 cpu : usr=99.36%, sys=0.14%, ctx=52, majf=0, minf=5597 00:20:43.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:43.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.721 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:43.721 issued rwts: total=65480,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:43.721 second_half: (groupid=0, jobs=1): err= 0: pid=75442: Wed Nov 20 16:08:40 2024 00:20:43.721 read: IOPS=2173, BW=8693KiB/s (8901kB/s)(256MiB/30131msec) 00:20:43.721 slat (nsec): min=4015, max=26388, avg=4831.28, stdev=975.54 00:20:43.721 clat (usec): min=1105, max=421359, avg=47792.75, stdev=43665.85 00:20:43.721 lat (usec): min=1113, max=421364, avg=47797.58, stdev=43665.95 00:20:43.721 clat percentiles (msec): 00:20:43.721 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:20:43.721 | 30.00th=[ 33], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 39], 00:20:43.721 | 70.00th=[ 43], 80.00th=[ 51], 90.00th=[ 58], 95.00th=[ 125], 00:20:43.721 | 
99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 384], 00:20:43.721 | 99.99th=[ 414] 00:20:43.721 write: IOPS=2178, BW=8715KiB/s (8924kB/s)(256MiB/30080msec); 0 zone resets 00:20:43.721 slat (usec): min=4, max=696, avg= 6.41, stdev= 5.56 00:20:43.721 clat (usec): min=659, max=77777, avg=11053.74, stdev=10070.30 00:20:43.721 lat (usec): min=664, max=77782, avg=11060.15, stdev=10070.32 00:20:43.721 clat percentiles (usec): 00:20:43.721 | 1.00th=[ 1336], 5.00th=[ 1958], 10.00th=[ 2671], 20.00th=[ 4883], 00:20:43.721 | 30.00th=[ 6587], 40.00th=[ 7963], 50.00th=[ 9110], 60.00th=[10290], 00:20:43.721 | 70.00th=[11731], 80.00th=[13566], 90.00th=[17171], 95.00th=[33162], 00:20:43.721 | 99.00th=[56361], 99.50th=[60556], 99.90th=[70779], 99.95th=[73925], 00:20:43.721 | 99.99th=[74974] 00:20:43.721 bw ( KiB/s): min= 952, max=39920, per=100.00%, avg=18001.93, stdev=14219.71, samples=29 00:20:43.721 iops : min= 238, max= 9980, avg=4500.48, stdev=3554.93, samples=29 00:20:43.721 lat (usec) : 750=0.01%, 1000=0.10% 00:20:43.721 lat (msec) : 2=2.56%, 4=5.36%, 10=21.25%, 20=18.77%, 50=40.56% 00:20:43.721 lat (msec) : 100=8.50%, 250=2.03%, 500=0.86% 00:20:43.721 cpu : usr=99.27%, sys=0.10%, ctx=43, majf=0, minf=5530 00:20:43.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:43.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.721 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:43.721 issued rwts: total=65480,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:43.721 00:20:43.721 Run status group 0 (all jobs): 00:20:43.721 READ: bw=17.0MiB/s (17.8MB/s), 8693KiB/s-8775KiB/s (8901kB/s-8985kB/s), io=512MiB (536MB), run=29850-30131msec 00:20:43.721 WRITE: bw=17.0MiB/s (17.8MB/s), 8715KiB/s-8822KiB/s (8924kB/s-9034kB/s), io=512MiB (537MB), run=29714-30080msec 00:20:45.636 ----------------------------------------------------- 00:20:45.636 Suppressions used: 00:20:45.636 count bytes template 00:20:45.636 2 10 /usr/src/fio/parse.c 00:20:45.636 4 384 /usr/src/fio/iolog.c 00:20:45.636 1 8 libtcmalloc_minimal.so 00:20:45.636 1 904 libcrypto.so 00:20:45.636 ----------------------------------------------------- 00:20:45.636 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:45.636 16:08:43 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:45.636 16:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:45.636 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:45.636 fio-3.35 00:20:45.636 Starting 1 thread 00:21:07.595 00:21:07.595 test: (groupid=0, jobs=1): err= 0: pid=75816: Wed Nov 20 16:09:02 2024 00:21:07.595 read: IOPS=6560, BW=25.6MiB/s (26.9MB/s)(255MiB/9938msec) 00:21:07.595 slat (nsec): min=4052, max=27571, avg=4689.40, stdev=885.49 00:21:07.595 clat (usec): min=653, max=39002, avg=19501.60, stdev=2855.47 00:21:07.595 lat (usec): min=659, max=39007, avg=19506.29, stdev=2855.48 00:21:07.595 clat percentiles (usec): 00:21:07.595 | 1.00th=[14353], 5.00th=[15139], 10.00th=[15664], 20.00th=[17171], 00:21:07.595 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19530], 60.00th=[20055], 00:21:07.595 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22938], 95.00th=[24511], 00:21:07.595 | 99.00th=[27395], 99.50th=[29230], 99.90th=[32637], 99.95th=[34341], 00:21:07.595 | 99.99th=[39060] 00:21:07.595 write: IOPS=8954, BW=35.0MiB/s (36.7MB/s)(256MiB/7319msec); 0 zone resets 00:21:07.595 slat (usec): min=4, max=1251, avg= 7.07, stdev= 7.89 00:21:07.595 clat (usec): min=579, max=80678, avg=14232.10, stdev=16735.23 00:21:07.595 lat (usec): min=586, max=80684, avg=14239.17, stdev=16735.20 00:21:07.595 clat percentiles (usec): 00:21:07.595 | 1.00th=[ 1045], 5.00th=[ 1352], 10.00th=[ 1582], 20.00th=[ 1942], 00:21:07.595 | 30.00th=[ 2409], 40.00th=[ 3458], 50.00th=[ 9634], 60.00th=[11863], 00:21:07.595 | 70.00th=[13960], 80.00th=[17171], 90.00th=[48497], 95.00th=[53216], 00:21:07.595 | 99.00th=[59507], 99.50th=[61604], 99.90th=[67634], 99.95th=[69731], 00:21:07.595 | 99.99th=[74974] 00:21:07.595 bw ( KiB/s): min=19248, max=47200, per=97.57%, avg=34948.20, stdev=6395.15, samples=15 00:21:07.595 iops : min= 4812, max=11800, avg=8737.00, stdev=1598.81, samples=15 00:21:07.595 lat (usec) : 750=0.04%, 1000=0.33% 00:21:07.595 lat (msec) : 2=10.40%, 4=9.83%, 10=5.16%, 20=46.01%, 50=24.03% 00:21:07.595 lat (msec) : 100=4.20% 00:21:07.595 cpu : usr=99.04%, sys=0.18%, ctx=44, majf=0, minf=5565 
00:21:07.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:07.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.595 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.595 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.595 00:21:07.595 Run status group 0 (all jobs): 00:21:07.595 READ: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=255MiB (267MB), run=9938-9938msec 00:21:07.595 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=256MiB (268MB), run=7319-7319msec 00:21:07.595 ----------------------------------------------------- 00:21:07.595 Suppressions used: 00:21:07.595 count bytes template 00:21:07.595 1 5 /usr/src/fio/parse.c 00:21:07.595 2 192 /usr/src/fio/iolog.c 00:21:07.595 1 8 libtcmalloc_minimal.so 00:21:07.595 1 904 libcrypto.so 00:21:07.595 ----------------------------------------------------- 00:21:07.595 00:21:07.595 16:09:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:07.595 16:09:03 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.595 16:09:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:07.596 Remove shared memory files 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57165 /dev/shm/spdk_tgt_trace.pid74063 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:07.596 ************************************ 00:21:07.596 END TEST ftl_fio_basic 00:21:07.596 ************************************ 00:21:07.596 00:21:07.596 real 1m16.336s 00:21:07.596 user 2m40.632s 00:21:07.596 sys 0m13.259s 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.596 16:09:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:07.596 16:09:03 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:07.596 16:09:03 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:07.596 16:09:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.596 16:09:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:07.596 ************************************ 00:21:07.596 START TEST ftl_bdevperf 00:21:07.596 ************************************ 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:07.596 * Looking for test storage... 
00:21:07.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.596 --rc genhtml_branch_coverage=1 00:21:07.596 --rc genhtml_function_coverage=1 00:21:07.596 --rc genhtml_legend=1 00:21:07.596 --rc geninfo_all_blocks=1 00:21:07.596 --rc geninfo_unexecuted_blocks=1 00:21:07.596 00:21:07.596 ' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.596 --rc genhtml_branch_coverage=1 00:21:07.596 
--rc genhtml_function_coverage=1 00:21:07.596 --rc genhtml_legend=1 00:21:07.596 --rc geninfo_all_blocks=1 00:21:07.596 --rc geninfo_unexecuted_blocks=1 00:21:07.596 00:21:07.596 ' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.596 --rc genhtml_branch_coverage=1 00:21:07.596 --rc genhtml_function_coverage=1 00:21:07.596 --rc genhtml_legend=1 00:21:07.596 --rc geninfo_all_blocks=1 00:21:07.596 --rc geninfo_unexecuted_blocks=1 00:21:07.596 00:21:07.596 ' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.596 --rc genhtml_branch_coverage=1 00:21:07.596 --rc genhtml_function_coverage=1 00:21:07.596 --rc genhtml_legend=1 00:21:07.596 --rc geninfo_all_blocks=1 00:21:07.596 --rc geninfo_unexecuted_blocks=1 00:21:07.596 00:21:07.596 ' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76087
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76087
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76087 ']'
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:07.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:07.596 16:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:07.596 [2024-11-20 16:09:03.888432] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
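A minimal sketch of the launch pattern traced above: start bdevperf in "wait for RPC" mode (-z) against target bdev ftl0, install the cleanup trap, and poll the default RPC socket until the app answers. The paths, the -z -T ftl0 invocation, the trap, and max_retries=100 are taken from this run; the polling loop and the plain kill are illustrative stand-ins for the autotest waitforlisten and killprocess helpers, which are more elaborate.

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -z -T ftl0 &   # -z: start up, then wait for RPC before running tests
    bdevperf_pid=$!
    trap 'kill "$bdevperf_pid"; exit 1' SIGINT SIGTERM EXIT
    for ((i = 0; i < 100; i++)); do
        # any cheap RPC serves as a liveness probe once /var/tmp/spdk.sock is listening
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done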
00:21:07.596 [2024-11-20 16:09:03.888753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76087 ] 00:21:07.596 [2024-11-20 16:09:04.047865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.596 [2024-11-20 16:09:04.175406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:07.596 16:09:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.596 { 00:21:07.596 "name": "nvme0n1", 00:21:07.596 "aliases": [ 00:21:07.596 "0f3c3e8b-8fdc-4897-a1a3-13002ea550f3" 00:21:07.596 ], 00:21:07.596 "product_name": "NVMe disk", 00:21:07.596 "block_size": 4096, 00:21:07.596 "num_blocks": 1310720, 00:21:07.596 "uuid": "0f3c3e8b-8fdc-4897-a1a3-13002ea550f3", 00:21:07.596 "numa_id": -1, 00:21:07.596 "assigned_rate_limits": { 00:21:07.596 "rw_ios_per_sec": 0, 00:21:07.596 "rw_mbytes_per_sec": 0, 00:21:07.596 "r_mbytes_per_sec": 0, 00:21:07.596 "w_mbytes_per_sec": 0 00:21:07.596 }, 00:21:07.596 "claimed": true, 00:21:07.596 "claim_type": "read_many_write_one", 00:21:07.596 "zoned": false, 00:21:07.596 "supported_io_types": { 00:21:07.596 "read": true, 00:21:07.596 "write": true, 00:21:07.596 "unmap": true, 00:21:07.596 "flush": true, 00:21:07.596 "reset": true, 00:21:07.596 "nvme_admin": true, 00:21:07.596 "nvme_io": true, 00:21:07.596 "nvme_io_md": false, 00:21:07.596 "write_zeroes": true, 00:21:07.596 "zcopy": false, 00:21:07.596 "get_zone_info": false, 00:21:07.596 "zone_management": false, 00:21:07.596 "zone_append": false, 00:21:07.596 "compare": true, 00:21:07.596 "compare_and_write": false, 00:21:07.596 "abort": true, 00:21:07.596 "seek_hole": false, 00:21:07.596 "seek_data": false, 00:21:07.596 "copy": true, 00:21:07.596 "nvme_iov_md": false 00:21:07.596 }, 00:21:07.596 "driver_specific": { 00:21:07.596 
"nvme": [ 00:21:07.596 { 00:21:07.596 "pci_address": "0000:00:11.0", 00:21:07.596 "trid": { 00:21:07.596 "trtype": "PCIe", 00:21:07.596 "traddr": "0000:00:11.0" 00:21:07.596 }, 00:21:07.596 "ctrlr_data": { 00:21:07.596 "cntlid": 0, 00:21:07.596 "vendor_id": "0x1b36", 00:21:07.596 "model_number": "QEMU NVMe Ctrl", 00:21:07.596 "serial_number": "12341", 00:21:07.596 "firmware_revision": "8.0.0", 00:21:07.596 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:07.596 "oacs": { 00:21:07.596 "security": 0, 00:21:07.596 "format": 1, 00:21:07.596 "firmware": 0, 00:21:07.596 "ns_manage": 1 00:21:07.596 }, 00:21:07.596 "multi_ctrlr": false, 00:21:07.596 "ana_reporting": false 00:21:07.596 }, 00:21:07.596 "vs": { 00:21:07.596 "nvme_version": "1.4" 00:21:07.596 }, 00:21:07.596 "ns_data": { 00:21:07.596 "id": 1, 00:21:07.596 "can_share": false 00:21:07.596 } 00:21:07.596 } 00:21:07.596 ], 00:21:07.596 "mp_policy": "active_passive" 00:21:07.596 } 00:21:07.596 } 00:21:07.596 ]' 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=6c7a3521-384e-47d1-89e7-275e5384880b 00:21:07.596 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:07.597 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c7a3521-384e-47d1-89e7-275e5384880b 00:21:07.857 16:09:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2e60020a-9f86-4b14-980c-aa35e24783b8 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2e60020a-9f86-4b14-980c-aa35e24783b8 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.119 16:09:06 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:08.119 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:08.382 { 00:21:08.382 "name": "fb776c32-daaa-475d-bd1f-d3f75122b09e", 00:21:08.382 "aliases": [ 00:21:08.382 "lvs/nvme0n1p0" 00:21:08.382 ], 00:21:08.382 "product_name": "Logical Volume", 00:21:08.382 "block_size": 4096, 00:21:08.382 "num_blocks": 26476544, 00:21:08.382 "uuid": "fb776c32-daaa-475d-bd1f-d3f75122b09e", 00:21:08.382 "assigned_rate_limits": { 00:21:08.382 "rw_ios_per_sec": 0, 00:21:08.382 "rw_mbytes_per_sec": 0, 00:21:08.382 "r_mbytes_per_sec": 0, 00:21:08.382 "w_mbytes_per_sec": 0 00:21:08.382 }, 00:21:08.382 "claimed": false, 00:21:08.382 "zoned": false, 00:21:08.382 "supported_io_types": { 00:21:08.382 "read": true, 00:21:08.382 "write": true, 00:21:08.382 "unmap": true, 00:21:08.382 "flush": false, 00:21:08.382 "reset": true, 00:21:08.382 "nvme_admin": false, 00:21:08.382 "nvme_io": false, 00:21:08.382 "nvme_io_md": false, 00:21:08.382 "write_zeroes": true, 00:21:08.382 "zcopy": false, 00:21:08.382 "get_zone_info": false, 00:21:08.382 "zone_management": false, 00:21:08.382 "zone_append": false, 00:21:08.382 "compare": false, 00:21:08.382 "compare_and_write": false, 00:21:08.382 "abort": false, 00:21:08.382 "seek_hole": true, 00:21:08.382 "seek_data": true, 00:21:08.382 "copy": false, 00:21:08.382 "nvme_iov_md": false 00:21:08.382 }, 00:21:08.382 "driver_specific": { 00:21:08.382 "lvol": { 00:21:08.382 "lvol_store_uuid": "2e60020a-9f86-4b14-980c-aa35e24783b8", 00:21:08.382 "base_bdev": "nvme0n1", 00:21:08.382 "thin_provision": true, 00:21:08.382 "num_allocated_clusters": 0, 00:21:08.382 "snapshot": false, 00:21:08.382 "clone": false, 00:21:08.382 "esnap_clone": false 00:21:08.382 } 00:21:08.382 } 00:21:08.382 } 00:21:08.382 ]' 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:08.382 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:08.955 16:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:08.955 { 00:21:08.955 "name": "fb776c32-daaa-475d-bd1f-d3f75122b09e", 00:21:08.955 "aliases": [ 00:21:08.955 "lvs/nvme0n1p0" 00:21:08.955 ], 00:21:08.955 "product_name": "Logical Volume", 00:21:08.955 "block_size": 4096, 00:21:08.955 "num_blocks": 26476544, 00:21:08.955 "uuid": "fb776c32-daaa-475d-bd1f-d3f75122b09e", 00:21:08.955 "assigned_rate_limits": { 00:21:08.955 "rw_ios_per_sec": 0, 00:21:08.955 "rw_mbytes_per_sec": 0, 00:21:08.955 "r_mbytes_per_sec": 0, 00:21:08.955 "w_mbytes_per_sec": 0 00:21:08.955 }, 00:21:08.955 "claimed": false, 00:21:08.955 "zoned": false, 00:21:08.955 "supported_io_types": { 00:21:08.955 "read": true, 00:21:08.955 "write": true, 00:21:08.955 "unmap": true, 00:21:08.955 "flush": false, 00:21:08.955 "reset": true, 00:21:08.955 "nvme_admin": false, 00:21:08.955 "nvme_io": false, 00:21:08.955 "nvme_io_md": false, 00:21:08.955 "write_zeroes": true, 00:21:08.955 "zcopy": false, 00:21:08.955 "get_zone_info": false, 00:21:08.955 "zone_management": false, 00:21:08.955 "zone_append": false, 00:21:08.955 "compare": false, 00:21:08.955 "compare_and_write": false, 00:21:08.955 "abort": false, 00:21:08.955 "seek_hole": true, 00:21:08.955 "seek_data": true, 00:21:08.955 "copy": false, 00:21:08.955 "nvme_iov_md": false 00:21:08.955 }, 00:21:08.955 "driver_specific": { 00:21:08.955 "lvol": { 00:21:08.955 "lvol_store_uuid": "2e60020a-9f86-4b14-980c-aa35e24783b8", 00:21:08.955 "base_bdev": "nvme0n1", 00:21:08.955 "thin_provision": true, 00:21:08.955 "num_allocated_clusters": 0, 00:21:08.955 "snapshot": false, 00:21:08.955 "clone": false, 00:21:08.955 "esnap_clone": false 00:21:08.955 } 00:21:08.955 } 00:21:08.955 } 00:21:08.955 ]' 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:08.955 16:09:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:08.956 16:09:07 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:09.216 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb776c32-daaa-475d-bd1f-d3f75122b09e 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:09.477 { 00:21:09.477 "name": "fb776c32-daaa-475d-bd1f-d3f75122b09e", 00:21:09.477 "aliases": [ 00:21:09.477 "lvs/nvme0n1p0" 00:21:09.477 ], 00:21:09.477 "product_name": "Logical Volume", 00:21:09.477 "block_size": 4096, 00:21:09.477 "num_blocks": 26476544, 00:21:09.477 "uuid": "fb776c32-daaa-475d-bd1f-d3f75122b09e", 00:21:09.477 "assigned_rate_limits": { 00:21:09.477 "rw_ios_per_sec": 0, 00:21:09.477 "rw_mbytes_per_sec": 0, 00:21:09.477 "r_mbytes_per_sec": 0, 00:21:09.477 "w_mbytes_per_sec": 0 00:21:09.477 }, 00:21:09.477 "claimed": false, 00:21:09.477 "zoned": false, 00:21:09.477 "supported_io_types": { 00:21:09.477 "read": true, 00:21:09.477 "write": true, 00:21:09.477 "unmap": true, 00:21:09.477 "flush": false, 00:21:09.477 "reset": true, 00:21:09.477 "nvme_admin": false, 00:21:09.477 "nvme_io": false, 00:21:09.477 "nvme_io_md": false, 00:21:09.477 "write_zeroes": true, 00:21:09.477 "zcopy": false, 00:21:09.477 "get_zone_info": false, 00:21:09.477 "zone_management": false, 00:21:09.477 "zone_append": false, 00:21:09.477 "compare": false, 00:21:09.477 "compare_and_write": false, 00:21:09.477 "abort": false, 00:21:09.477 "seek_hole": true, 00:21:09.477 "seek_data": true, 00:21:09.477 "copy": false, 00:21:09.477 "nvme_iov_md": false 00:21:09.477 }, 00:21:09.477 "driver_specific": { 00:21:09.477 "lvol": { 00:21:09.477 "lvol_store_uuid": "2e60020a-9f86-4b14-980c-aa35e24783b8", 00:21:09.477 "base_bdev": "nvme0n1", 00:21:09.477 "thin_provision": true, 00:21:09.477 "num_allocated_clusters": 0, 00:21:09.477 "snapshot": false, 00:21:09.477 "clone": false, 00:21:09.477 "esnap_clone": false 00:21:09.477 } 00:21:09.477 } 00:21:09.477 } 00:21:09.477 ]' 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:09.477 16:09:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fb776c32-daaa-475d-bd1f-d3f75122b09e -c nvc0n1p0 --l2p_dram_limit 20 00:21:09.739 [2024-11-20 16:09:07.903912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.739 [2024-11-20 16:09:07.904107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:09.739 [2024-11-20 16:09:07.904129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:09.739 [2024-11-20 16:09:07.904139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.739 [2024-11-20 16:09:07.904203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.739 [2024-11-20 16:09:07.904215] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.739 [2024-11-20 16:09:07.904224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:09.739 [2024-11-20 16:09:07.904233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.739 [2024-11-20 16:09:07.904251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:09.739 [2024-11-20 16:09:07.904985] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:09.739 [2024-11-20 16:09:07.905013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.739 [2024-11-20 16:09:07.905023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.739 [2024-11-20 16:09:07.905032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:21:09.739 [2024-11-20 16:09:07.905041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.739 [2024-11-20 16:09:07.905102] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ca0589ef-3f31-41ed-8494-085a744f2a66 00:21:09.739 [2024-11-20 16:09:07.906277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.739 [2024-11-20 16:09:07.906309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:09.739 [2024-11-20 16:09:07.906324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:09.739 [2024-11-20 16:09:07.906332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.739 [2024-11-20 16:09:07.911871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.739 [2024-11-20 16:09:07.911905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.740 [2024-11-20 16:09:07.911918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.500 ms 00:21:09.740 [2024-11-20 16:09:07.911927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.912020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.740 [2024-11-20 16:09:07.912029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.740 [2024-11-20 16:09:07.912042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:09.740 [2024-11-20 16:09:07.912049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.912104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.740 [2024-11-20 16:09:07.912114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:09.740 [2024-11-20 16:09:07.912123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:09.740 [2024-11-20 16:09:07.912130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.912153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.740 [2024-11-20 16:09:07.915782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.740 [2024-11-20 16:09:07.915817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.740 [2024-11-20 16:09:07.915827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:21:09.740 [2024-11-20 16:09:07.915840] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.915873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.740 [2024-11-20 16:09:07.915883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:09.740 [2024-11-20 16:09:07.915891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:09.740 [2024-11-20 16:09:07.915900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.915928] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:09.740 [2024-11-20 16:09:07.916070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:09.740 [2024-11-20 16:09:07.916082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:09.740 [2024-11-20 16:09:07.916095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:09.740 [2024-11-20 16:09:07.916105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916115] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:09.740 [2024-11-20 16:09:07.916132] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:09.740 [2024-11-20 16:09:07.916139] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:09.740 [2024-11-20 16:09:07.916147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:09.740 [2024-11-20 16:09:07.916156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.740 [2024-11-20 16:09:07.916165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:09.740 [2024-11-20 16:09:07.916173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:09.740 [2024-11-20 16:09:07.916182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.916262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.740 [2024-11-20 16:09:07.916273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:09.740 [2024-11-20 16:09:07.916280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:09.740 [2024-11-20 16:09:07.916290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.740 [2024-11-20 16:09:07.916397] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:09.740 [2024-11-20 16:09:07.916411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:09.740 [2024-11-20 16:09:07.916418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:09.740 [2024-11-20 16:09:07.916443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:09.740 
[2024-11-20 16:09:07.916457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:09.740 [2024-11-20 16:09:07.916464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.740 [2024-11-20 16:09:07.916478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:09.740 [2024-11-20 16:09:07.916488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:09.740 [2024-11-20 16:09:07.916494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.740 [2024-11-20 16:09:07.916509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:09.740 [2024-11-20 16:09:07.916516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:09.740 [2024-11-20 16:09:07.916525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:09.740 [2024-11-20 16:09:07.916540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:09.740 [2024-11-20 16:09:07.916562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:09.740 [2024-11-20 16:09:07.916584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:09.740 [2024-11-20 16:09:07.916605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:09.740 [2024-11-20 16:09:07.916628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:09.740 [2024-11-20 16:09:07.916652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.740 [2024-11-20 16:09:07.916667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:09.740 [2024-11-20 16:09:07.916675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:09.740 [2024-11-20 16:09:07.916682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.740 [2024-11-20 16:09:07.916690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:09.740 [2024-11-20 16:09:07.916697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:09.740 [2024-11-20 16:09:07.916705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:09.740 [2024-11-20 16:09:07.916719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:09.740 [2024-11-20 16:09:07.916743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916751] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:09.740 [2024-11-20 16:09:07.916759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:09.740 [2024-11-20 16:09:07.916768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.740 [2024-11-20 16:09:07.916788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:09.740 [2024-11-20 16:09:07.916795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:09.740 [2024-11-20 16:09:07.916803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:09.740 [2024-11-20 16:09:07.916810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:09.740 [2024-11-20 16:09:07.916817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:09.740 [2024-11-20 16:09:07.916824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:09.740 [2024-11-20 16:09:07.916836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:09.740 [2024-11-20 16:09:07.916845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.740 [2024-11-20 16:09:07.916855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:09.740 [2024-11-20 16:09:07.916862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:09.740 [2024-11-20 16:09:07.916871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:09.740 [2024-11-20 16:09:07.916877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:09.740 [2024-11-20 16:09:07.916886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:09.740 [2024-11-20 16:09:07.916893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:09.740 [2024-11-20 16:09:07.916901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:09.740 [2024-11-20 16:09:07.916908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:09.741 [2024-11-20 16:09:07.916918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:09.741 [2024-11-20 16:09:07.916928] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:09.741 [2024-11-20 16:09:07.916937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:09.741 [2024-11-20 16:09:07.916944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:09.741 [2024-11-20 16:09:07.916953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:09.741 [2024-11-20 16:09:07.916960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:09.741 [2024-11-20 16:09:07.916969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:09.741 [2024-11-20 16:09:07.916976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.741 [2024-11-20 16:09:07.916990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:09.741 [2024-11-20 16:09:07.916997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:09.741 [2024-11-20 16:09:07.917005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:09.741 [2024-11-20 16:09:07.917013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:09.741 [2024-11-20 16:09:07.917023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.741 [2024-11-20 16:09:07.917030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:09.741 [2024-11-20 16:09:07.917039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:21:09.741 [2024-11-20 16:09:07.917045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.741 [2024-11-20 16:09:07.917080] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
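A note on reconciling the two dumps above: dump_region prints region offsets and sizes in MiB, while the superblock v5 dump prints the same regions as hex block offsets (blk_offs) and block counts (blk_sz). A minimal cross-check sketch, assuming the 4 KiB FTL block size that the dump itself implies (blk_sz 0x800 for each p2l region lines up with the 8.00 MiB printed earlier):

    blk_to_mib() {
        # 256 FTL blocks of 4 KiB each = 1 MiB
        printf '%d.%02d MiB\n' $(( $1 / 256 )) $(( $1 % 256 * 100 / 256 ))
    }
    blk_to_mib $(( 0x5120 ))   # type 0xa (p2l0) blk_offs -> 81.12 MiB, as in dump_region
    blk_to_mib $(( 0x800 ))    # p2l blk_sz               -> 8.00 MiB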
00:21:09.741 [2024-11-20 16:09:07.917089] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:13.946 [2024-11-20 16:09:11.335595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.335685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:13.946 [2024-11-20 16:09:11.335710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3418.491 ms 00:21:13.946 [2024-11-20 16:09:11.335752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.363777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.363832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:13.946 [2024-11-20 16:09:11.363847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.694 ms 00:21:13.946 [2024-11-20 16:09:11.363855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.364021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.364033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:13.946 [2024-11-20 16:09:11.364046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:13.946 [2024-11-20 16:09:11.364053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.406500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.406558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:13.946 [2024-11-20 16:09:11.406574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.409 ms 00:21:13.946 [2024-11-20 16:09:11.406582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.406634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.406644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.946 [2024-11-20 16:09:11.406653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:13.946 [2024-11-20 16:09:11.406663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.407311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.407427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.946 [2024-11-20 16:09:11.407446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:21:13.946 [2024-11-20 16:09:11.407454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.407585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.407594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.946 [2024-11-20 16:09:11.407605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:13.946 [2024-11-20 16:09:11.407612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.421004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.421053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.946 [2024-11-20 
16:09:11.421066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.371 ms 00:21:13.946 [2024-11-20 16:09:11.421076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.432665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:13.946 [2024-11-20 16:09:11.437868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.437917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:13.946 [2024-11-20 16:09:11.437930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.697 ms 00:21:13.946 [2024-11-20 16:09:11.437941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.526156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.526382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:13.946 [2024-11-20 16:09:11.526404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.173 ms 00:21:13.946 [2024-11-20 16:09:11.526414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.527019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.527069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:13.946 [2024-11-20 16:09:11.527115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:21:13.946 [2024-11-20 16:09:11.527131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.560774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.561027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:13.946 [2024-11-20 16:09:11.561048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.521 ms 00:21:13.946 [2024-11-20 16:09:11.561058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.586740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.946 [2024-11-20 16:09:11.586802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:13.946 [2024-11-20 16:09:11.586816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.617 ms 00:21:13.946 [2024-11-20 16:09:11.586826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.946 [2024-11-20 16:09:11.587433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.587446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:13.947 [2024-11-20 16:09:11.587455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:21:13.947 [2024-11-20 16:09:11.587464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 16:09:11.665432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.665496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:13.947 [2024-11-20 16:09:11.665510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.921 ms 00:21:13.947 [2024-11-20 16:09:11.665520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 
16:09:11.691791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.692032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:13.947 [2024-11-20 16:09:11.692056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.162 ms 00:21:13.947 [2024-11-20 16:09:11.692065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 16:09:11.718955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.719177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:13.947 [2024-11-20 16:09:11.719196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.791 ms 00:21:13.947 [2024-11-20 16:09:11.719205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 16:09:11.745857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.746100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:13.947 [2024-11-20 16:09:11.746118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.558 ms 00:21:13.947 [2024-11-20 16:09:11.746127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 16:09:11.746177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.746190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:13.947 [2024-11-20 16:09:11.746198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:13.947 [2024-11-20 16:09:11.746207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 16:09:11.746312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.947 [2024-11-20 16:09:11.746325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:13.947 [2024-11-20 16:09:11.746334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:13.947 [2024-11-20 16:09:11.746344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.947 [2024-11-20 16:09:11.747301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3842.910 ms, result 0 00:21:13.947 { 00:21:13.947 "name": "ftl0", 00:21:13.947 "uuid": "ca0589ef-3f31-41ed-8494-085a744f2a66" 00:21:13.947 } 00:21:13.947 16:09:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:13.947 16:09:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:13.947 16:09:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:13.947 16:09:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:13.947 [2024-11-20 16:09:12.063505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:13.947 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:13.947 Zero copy mechanism will not be used. 00:21:13.947 Running I/O for 4 seconds... 
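The zero-copy notice above follows directly from the chosen I/O size: 69632 bytes is 17 FTL blocks of 4 KiB, i.e. 4096 bytes over bdevperf's 65536-byte zero-copy threshold, so zero copy is disabled for this run, as logged. The arithmetic, as a sketch:

    echo $(( 69632 / 4096 ))    # 17 four-KiB blocks per I/O
    echo $(( 69632 - 65536 ))   # 4096 bytes over the zero-copy threshold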
00:21:15.826 625.00 IOPS, 41.50 MiB/s [2024-11-20T16:09:15.461Z] 619.00 IOPS, 41.11 MiB/s [2024-11-20T16:09:16.405Z] 597.67 IOPS, 39.69 MiB/s [2024-11-20T16:09:16.405Z] 652.25 IOPS, 43.31 MiB/s 00:21:18.155 Latency(us) 00:21:18.155 [2024-11-20T16:09:16.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.155 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:18.155 ftl0 : 4.00 652.19 43.31 0.00 0.00 1625.53 283.57 52832.10 00:21:18.155 [2024-11-20T16:09:16.405Z] =================================================================================================================== 00:21:18.155 [2024-11-20T16:09:16.405Z] Total : 652.19 43.31 0.00 0.00 1625.53 283.57 52832.10 00:21:18.155 [2024-11-20 16:09:16.074155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:18.155 { 00:21:18.155 "results": [ 00:21:18.155 { 00:21:18.155 "job": "ftl0", 00:21:18.155 "core_mask": "0x1", 00:21:18.156 "workload": "randwrite", 00:21:18.156 "status": "finished", 00:21:18.156 "queue_depth": 1, 00:21:18.156 "io_size": 69632, 00:21:18.156 "runtime": 4.001892, 00:21:18.156 "iops": 652.1915134141551, 00:21:18.156 "mibps": 43.309592687658736, 00:21:18.156 "io_failed": 0, 00:21:18.156 "io_timeout": 0, 00:21:18.156 "avg_latency_us": 1625.5277194223402, 00:21:18.156 "min_latency_us": 283.5692307692308, 00:21:18.156 "max_latency_us": 52832.09846153846 00:21:18.156 } 00:21:18.156 ], 00:21:18.156 "core_count": 1 00:21:18.156 } 00:21:18.156 16:09:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:18.156 [2024-11-20 16:09:16.181336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:18.156 Running I/O for 4 seconds... 
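The qd=1 summary above is internally consistent: throughput in MiB/s is just IOPS times the 69632-byte I/O size. A one-line check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 652.19 * 69632 / 1048576 }'
    # -> 43.31 MiB/s, matching the Total row of the first run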
00:21:20.023 7414.00 IOPS, 28.96 MiB/s [2024-11-20T16:09:19.207Z] 8875.00 IOPS, 34.67 MiB/s [2024-11-20T16:09:20.578Z] 9451.33 IOPS, 36.92 MiB/s [2024-11-20T16:09:20.578Z] 9552.75 IOPS, 37.32 MiB/s 00:21:22.328 Latency(us) 00:21:22.328 [2024-11-20T16:09:20.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.328 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:22.328 ftl0 : 4.01 9550.07 37.30 0.00 0.00 13377.25 274.12 113730.17 00:21:22.328 [2024-11-20T16:09:20.578Z] =================================================================================================================== 00:21:22.328 [2024-11-20T16:09:20.578Z] Total : 9550.07 37.30 0.00 0.00 13377.25 0.00 113730.17 00:21:22.328 { 00:21:22.328 "results": [ 00:21:22.328 { 00:21:22.328 "job": "ftl0", 00:21:22.328 "core_mask": "0x1", 00:21:22.328 "workload": "randwrite", 00:21:22.328 "status": "finished", 00:21:22.328 "queue_depth": 128, 00:21:22.328 "io_size": 4096, 00:21:22.329 "runtime": 4.014003, 00:21:22.329 "iops": 9550.067600846338, 00:21:22.329 "mibps": 37.30495156580601, 00:21:22.329 "io_failed": 0, 00:21:22.329 "io_timeout": 0, 00:21:22.329 "avg_latency_us": 13377.252337069724, 00:21:22.329 "min_latency_us": 274.11692307692306, 00:21:22.329 "max_latency_us": 113730.16615384615 00:21:22.329 } 00:21:22.329 ], 00:21:22.329 "core_count": 1 00:21:22.329 } 00:21:22.329 [2024-11-20 16:09:20.204442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:22.329 16:09:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:22.329 [2024-11-20 16:09:20.310616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:22.329 Running I/O for 4 seconds... 
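For the qd=128, 4 KiB randwrite run above, Little's law (mean outstanding I/Os = IOPS x mean latency) recovers the configured queue depth, a quick sanity check when reading these tables:

    awk 'BEGIN { printf "%.1f outstanding I/Os\n", 9550.07 * 13377.25 / 1e6 }'
    # ~127.8 on average, consistent with perform_tests -q 128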
00:21:24.195 8534.00 IOPS, 33.34 MiB/s [2024-11-20T16:09:23.378Z] 8701.00 IOPS, 33.99 MiB/s [2024-11-20T16:09:24.320Z] 8791.33 IOPS, 34.34 MiB/s [2024-11-20T16:09:24.578Z] 8831.25 IOPS, 34.50 MiB/s 00:21:26.328 Latency(us) 00:21:26.328 [2024-11-20T16:09:24.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.328 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:26.328 Verification LBA range: start 0x0 length 0x1400000 00:21:26.328 ftl0 : 4.01 8843.39 34.54 0.00 0.00 14428.91 267.82 25811.10 00:21:26.328 [2024-11-20T16:09:24.578Z] =================================================================================================================== 00:21:26.328 [2024-11-20T16:09:24.578Z] Total : 8843.39 34.54 0.00 0.00 14428.91 0.00 25811.10 00:21:26.328 [2024-11-20 16:09:24.334210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:26.328 { 00:21:26.328 "results": [ 00:21:26.328 { 00:21:26.328 "job": "ftl0", 00:21:26.328 "core_mask": "0x1", 00:21:26.328 "workload": "verify", 00:21:26.328 "status": "finished", 00:21:26.328 "verify_range": { 00:21:26.328 "start": 0, 00:21:26.328 "length": 20971520 00:21:26.328 }, 00:21:26.328 "queue_depth": 128, 00:21:26.328 "io_size": 4096, 00:21:26.328 "runtime": 4.008981, 00:21:26.328 "iops": 8843.394368793466, 00:21:26.328 "mibps": 34.54450925309948, 00:21:26.328 "io_failed": 0, 00:21:26.328 "io_timeout": 0, 00:21:26.328 "avg_latency_us": 14428.90603368707, 00:21:26.328 "min_latency_us": 267.81538461538463, 00:21:26.328 "max_latency_us": 25811.10153846154 00:21:26.328 } 00:21:26.328 ], 00:21:26.328 "core_count": 1 00:21:26.328 } 00:21:26.328 16:09:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:26.328 [2024-11-20 16:09:24.544213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.328 [2024-11-20 16:09:24.544272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:26.328 [2024-11-20 16:09:24.544285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:26.328 [2024-11-20 16:09:24.544295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.328 [2024-11-20 16:09:24.544317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:26.328 [2024-11-20 16:09:24.546953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.328 [2024-11-20 16:09:24.546987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:26.328 [2024-11-20 16:09:24.547000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:21:26.328 [2024-11-20 16:09:24.547008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.328 [2024-11-20 16:09:24.548950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.328 [2024-11-20 16:09:24.549093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:26.328 [2024-11-20 16:09:24.549118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.916 ms 00:21:26.328 [2024-11-20 16:09:24.549127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.688579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.688804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:21:26.586 [2024-11-20 16:09:24.688881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 139.420 ms 00:21:26.586 [2024-11-20 16:09:24.688905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.695069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.695230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:26.586 [2024-11-20 16:09:24.695290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.087 ms 00:21:26.586 [2024-11-20 16:09:24.695316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.719643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.719845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:26.586 [2024-11-20 16:09:24.719906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.212 ms 00:21:26.586 [2024-11-20 16:09:24.719929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.735372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.735558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:26.586 [2024-11-20 16:09:24.735614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.301 ms 00:21:26.586 [2024-11-20 16:09:24.735637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.735814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.735842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:26.586 [2024-11-20 16:09:24.735896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:26.586 [2024-11-20 16:09:24.735919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.760039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.760227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:26.586 [2024-11-20 16:09:24.760282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.085 ms 00:21:26.586 [2024-11-20 16:09:24.760304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.784428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.784619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:26.586 [2024-11-20 16:09:24.784673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.044 ms 00:21:26.586 [2024-11-20 16:09:24.784695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.586 [2024-11-20 16:09:24.815889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.586 [2024-11-20 16:09:24.816094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:26.586 [2024-11-20 16:09:24.816157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.923 ms 00:21:26.586 [2024-11-20 16:09:24.816181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.845 [2024-11-20 16:09:24.839628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.845 [2024-11-20 16:09:24.839829] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:26.845 [2024-11-20 16:09:24.839889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.339 ms 00:21:26.845 [2024-11-20 16:09:24.839912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.845 [2024-11-20 16:09:24.840018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:26.845 [2024-11-20 16:09:24.840065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:26.845 [2024-11-20 16:09:24.840381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.840971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:26.846 [2024-11-20 16:09:24.841099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.841987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:26.846 [2024-11-20 16:09:24.842854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:26.847 [2024-11-20 16:09:24.842863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:26.847 [2024-11-20 16:09:24.842871] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:26.847 [2024-11-20 16:09:24.842881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:26.847 [2024-11-20 16:09:24.842888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:26.847 [2024-11-20 16:09:24.842897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:26.847 [2024-11-20 16:09:24.842913] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:26.847 [2024-11-20 16:09:24.842923] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ca0589ef-3f31-41ed-8494-085a744f2a66 00:21:26.847 [2024-11-20 16:09:24.842933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:26.847 [2024-11-20 16:09:24.842942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:26.847 [2024-11-20 16:09:24.842949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:26.847 [2024-11-20 16:09:24.842958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:26.847 [2024-11-20 16:09:24.842965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:26.847 [2024-11-20 16:09:24.842974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:26.847 [2024-11-20 16:09:24.842982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:26.847 [2024-11-20 16:09:24.842991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:26.847 [2024-11-20 16:09:24.842999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:26.847 [2024-11-20 16:09:24.843010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.847 [2024-11-20 16:09:24.843017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:26.847 [2024-11-20 16:09:24.843028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:21:26.847 [2024-11-20 16:09:24.843035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.855526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.847 [2024-11-20 16:09:24.855563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:26.847 [2024-11-20 16:09:24.855576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:21:26.847 [2024-11-20 16:09:24.855584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.855962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.847 [2024-11-20 16:09:24.856074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:26.847 [2024-11-20 16:09:24.856090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:21:26.847 [2024-11-20 16:09:24.856098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.890668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:24.890716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:26.847 [2024-11-20 16:09:24.890740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:24.890749] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.890819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:24.890827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:26.847 [2024-11-20 16:09:24.890836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:24.890843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.890947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:24.890959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:26.847 [2024-11-20 16:09:24.890968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:24.890976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.890992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:24.891000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:26.847 [2024-11-20 16:09:24.891009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:24.891017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:24.967218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:24.967263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:26.847 [2024-11-20 16:09:24.967278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:24.967286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.030537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.030717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.847 [2024-11-20 16:09:25.030750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:25.030758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.030832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.030842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.847 [2024-11-20 16:09:25.030851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:25.030859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.030915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.030925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.847 [2024-11-20 16:09:25.030934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:25.030941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.031028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.031039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.847 [2024-11-20 16:09:25.031050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:26.847 [2024-11-20 16:09:25.031062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.031092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.031101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:26.847 [2024-11-20 16:09:25.031110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:25.031117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.031150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.031160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.847 [2024-11-20 16:09:25.031169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:25.031176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.031216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.847 [2024-11-20 16:09:25.031231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.847 [2024-11-20 16:09:25.031239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.847 [2024-11-20 16:09:25.031247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.847 [2024-11-20 16:09:25.031363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.117 ms, result 0 00:21:26.847 true 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76087 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76087 ']' 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76087 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76087 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76087' 00:21:26.847 killing process with pid 76087 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76087 00:21:26.847 Received shutdown signal, test time was about 4.000000 seconds 00:21:26.847 00:21:26.847 Latency(us) 00:21:26.847 [2024-11-20T16:09:25.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.847 [2024-11-20T16:09:25.097Z] =================================================================================================================== 00:21:26.847 [2024-11-20T16:09:25.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.847 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76087 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:27.783 Remove shared memory files 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:27.783 16:09:25 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:27.783 ************************************ 00:21:27.783 END TEST ftl_bdevperf 00:21:27.783 ************************************ 00:21:27.783 00:21:27.783 real 0m22.317s 00:21:27.783 user 0m25.282s 00:21:27.783 sys 0m0.896s 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.783 16:09:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:27.783 16:09:25 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:27.783 16:09:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:27.783 16:09:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.783 16:09:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:27.783 ************************************ 00:21:27.783 START TEST ftl_trim 00:21:27.783 ************************************ 00:21:27.783 16:09:25 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:28.042 * Looking for test storage... 00:21:28.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.042 16:09:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:28.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.042 --rc genhtml_branch_coverage=1 00:21:28.042 --rc genhtml_function_coverage=1 00:21:28.042 --rc genhtml_legend=1 00:21:28.042 --rc geninfo_all_blocks=1 00:21:28.042 --rc geninfo_unexecuted_blocks=1 00:21:28.042 00:21:28.042 ' 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:28.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.042 --rc genhtml_branch_coverage=1 00:21:28.042 --rc genhtml_function_coverage=1 00:21:28.042 --rc genhtml_legend=1 00:21:28.042 --rc geninfo_all_blocks=1 00:21:28.042 --rc geninfo_unexecuted_blocks=1 00:21:28.042 00:21:28.042 ' 00:21:28.042 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:28.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.042 --rc genhtml_branch_coverage=1 00:21:28.042 --rc genhtml_function_coverage=1 00:21:28.043 --rc genhtml_legend=1 00:21:28.043 --rc geninfo_all_blocks=1 00:21:28.043 --rc geninfo_unexecuted_blocks=1 00:21:28.043 00:21:28.043 ' 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:28.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.043 --rc genhtml_branch_coverage=1 00:21:28.043 --rc genhtml_function_coverage=1 00:21:28.043 --rc genhtml_legend=1 00:21:28.043 --rc geninfo_all_blocks=1 00:21:28.043 --rc geninfo_unexecuted_blocks=1 00:21:28.043 00:21:28.043 ' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
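The xtrace above walks scripts/common.sh's lt/cmp_versions field-by-field comparison to decide that lcov 1.15 < 2 and therefore keep the legacy --rc lcov_* coverage options. A compact sort -V sketch of the same decision (an equivalent illustration, not the real helper):

    version_lt() {
        [ "$1" != "$2" ] &&
            [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo 'lcov < 2: keep legacy lcov branch/function coverage flags'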
00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:28.043 16:09:26 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76439 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76439 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76439 ']' 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.043 16:09:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:28.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.043 16:09:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:28.043 [2024-11-20 16:09:26.222887] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:21:28.043 [2024-11-20 16:09:26.223141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76439 ] 00:21:28.301 [2024-11-20 16:09:26.381713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.301 [2024-11-20 16:09:26.483648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.301 [2024-11-20 16:09:26.483858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.301 [2024-11-20 16:09:26.483888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.866 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.866 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:28.866 16:09:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:28.866 16:09:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:28.866 16:09:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:28.866 16:09:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:28.866 16:09:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:28.866 16:09:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:29.124 16:09:27 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:29.124 16:09:27 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:29.124 16:09:27 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:29.124 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:29.124 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:29.124 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:29.124 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:29.124 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:29.383 { 00:21:29.383 "name": "nvme0n1", 00:21:29.383 "aliases": [ 
00:21:29.383 "62f2c3c5-fbdb-4f9d-98ab-5c1ef2fee3f9" 00:21:29.383 ], 00:21:29.383 "product_name": "NVMe disk", 00:21:29.383 "block_size": 4096, 00:21:29.383 "num_blocks": 1310720, 00:21:29.383 "uuid": "62f2c3c5-fbdb-4f9d-98ab-5c1ef2fee3f9", 00:21:29.383 "numa_id": -1, 00:21:29.383 "assigned_rate_limits": { 00:21:29.383 "rw_ios_per_sec": 0, 00:21:29.383 "rw_mbytes_per_sec": 0, 00:21:29.383 "r_mbytes_per_sec": 0, 00:21:29.383 "w_mbytes_per_sec": 0 00:21:29.383 }, 00:21:29.383 "claimed": true, 00:21:29.383 "claim_type": "read_many_write_one", 00:21:29.383 "zoned": false, 00:21:29.383 "supported_io_types": { 00:21:29.383 "read": true, 00:21:29.383 "write": true, 00:21:29.383 "unmap": true, 00:21:29.383 "flush": true, 00:21:29.383 "reset": true, 00:21:29.383 "nvme_admin": true, 00:21:29.383 "nvme_io": true, 00:21:29.383 "nvme_io_md": false, 00:21:29.383 "write_zeroes": true, 00:21:29.383 "zcopy": false, 00:21:29.383 "get_zone_info": false, 00:21:29.383 "zone_management": false, 00:21:29.383 "zone_append": false, 00:21:29.383 "compare": true, 00:21:29.383 "compare_and_write": false, 00:21:29.383 "abort": true, 00:21:29.383 "seek_hole": false, 00:21:29.383 "seek_data": false, 00:21:29.383 "copy": true, 00:21:29.383 "nvme_iov_md": false 00:21:29.383 }, 00:21:29.383 "driver_specific": { 00:21:29.383 "nvme": [ 00:21:29.383 { 00:21:29.383 "pci_address": "0000:00:11.0", 00:21:29.383 "trid": { 00:21:29.383 "trtype": "PCIe", 00:21:29.383 "traddr": "0000:00:11.0" 00:21:29.383 }, 00:21:29.383 "ctrlr_data": { 00:21:29.383 "cntlid": 0, 00:21:29.383 "vendor_id": "0x1b36", 00:21:29.383 "model_number": "QEMU NVMe Ctrl", 00:21:29.383 "serial_number": "12341", 00:21:29.383 "firmware_revision": "8.0.0", 00:21:29.383 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:29.383 "oacs": { 00:21:29.383 "security": 0, 00:21:29.383 "format": 1, 00:21:29.383 "firmware": 0, 00:21:29.383 "ns_manage": 1 00:21:29.383 }, 00:21:29.383 "multi_ctrlr": false, 00:21:29.383 "ana_reporting": false 00:21:29.383 }, 00:21:29.383 "vs": { 00:21:29.383 "nvme_version": "1.4" 00:21:29.383 }, 00:21:29.383 "ns_data": { 00:21:29.383 "id": 1, 00:21:29.383 "can_share": false 00:21:29.383 } 00:21:29.383 } 00:21:29.383 ], 00:21:29.383 "mp_policy": "active_passive" 00:21:29.383 } 00:21:29.383 } 00:21:29.383 ]' 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:29.383 16:09:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:29.383 16:09:27 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:29.383 16:09:27 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:29.383 16:09:27 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:29.383 16:09:27 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:29.383 16:09:27 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:29.642 16:09:27 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2e60020a-9f86-4b14-980c-aa35e24783b8 00:21:29.642 16:09:27 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:29.642 16:09:27 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2e60020a-9f86-4b14-980c-aa35e24783b8 00:21:29.901 16:09:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:30.160 16:09:28 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f 00:21:30.160 16:09:28 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:30.419 16:09:28 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.419 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.419 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:30.419 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:30.419 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:30.419 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.677 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:30.677 { 00:21:30.677 "name": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:30.677 "aliases": [ 00:21:30.677 "lvs/nvme0n1p0" 00:21:30.677 ], 00:21:30.677 "product_name": "Logical Volume", 00:21:30.677 "block_size": 4096, 00:21:30.677 "num_blocks": 26476544, 00:21:30.677 "uuid": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:30.677 "assigned_rate_limits": { 00:21:30.677 "rw_ios_per_sec": 0, 00:21:30.677 "rw_mbytes_per_sec": 0, 00:21:30.677 "r_mbytes_per_sec": 0, 00:21:30.677 "w_mbytes_per_sec": 0 00:21:30.677 }, 00:21:30.677 "claimed": false, 00:21:30.677 "zoned": false, 00:21:30.677 "supported_io_types": { 00:21:30.677 "read": true, 00:21:30.677 "write": true, 00:21:30.677 "unmap": true, 00:21:30.678 "flush": false, 00:21:30.678 "reset": true, 00:21:30.678 "nvme_admin": false, 00:21:30.678 "nvme_io": false, 00:21:30.678 "nvme_io_md": false, 00:21:30.678 "write_zeroes": true, 00:21:30.678 "zcopy": false, 00:21:30.678 "get_zone_info": false, 00:21:30.678 "zone_management": false, 00:21:30.678 "zone_append": false, 00:21:30.678 "compare": false, 00:21:30.678 "compare_and_write": false, 00:21:30.678 "abort": false, 00:21:30.678 "seek_hole": true, 00:21:30.678 "seek_data": true, 00:21:30.678 "copy": false, 00:21:30.678 "nvme_iov_md": false 00:21:30.678 }, 00:21:30.678 "driver_specific": { 00:21:30.678 "lvol": { 00:21:30.678 "lvol_store_uuid": "a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f", 00:21:30.678 "base_bdev": "nvme0n1", 00:21:30.678 "thin_provision": true, 00:21:30.678 "num_allocated_clusters": 0, 00:21:30.678 "snapshot": false, 00:21:30.678 "clone": false, 00:21:30.678 "esnap_clone": false 00:21:30.678 } 00:21:30.678 } 00:21:30.678 } 00:21:30.678 ]' 00:21:30.678 16:09:28 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:30.678 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:30.678 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:30.678 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:30.678 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:30.678 16:09:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:30.678 16:09:28 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:30.678 16:09:28 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:30.678 16:09:28 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:30.936 16:09:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:30.936 16:09:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:30.936 16:09:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.936 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:30.936 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:30.936 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:30.936 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:30.936 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:31.195 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:31.195 { 00:21:31.195 "name": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:31.195 "aliases": [ 00:21:31.195 "lvs/nvme0n1p0" 00:21:31.195 ], 00:21:31.195 "product_name": "Logical Volume", 00:21:31.195 "block_size": 4096, 00:21:31.195 "num_blocks": 26476544, 00:21:31.195 "uuid": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:31.195 "assigned_rate_limits": { 00:21:31.195 "rw_ios_per_sec": 0, 00:21:31.195 "rw_mbytes_per_sec": 0, 00:21:31.195 "r_mbytes_per_sec": 0, 00:21:31.195 "w_mbytes_per_sec": 0 00:21:31.195 }, 00:21:31.195 "claimed": false, 00:21:31.195 "zoned": false, 00:21:31.195 "supported_io_types": { 00:21:31.195 "read": true, 00:21:31.195 "write": true, 00:21:31.195 "unmap": true, 00:21:31.195 "flush": false, 00:21:31.195 "reset": true, 00:21:31.195 "nvme_admin": false, 00:21:31.195 "nvme_io": false, 00:21:31.195 "nvme_io_md": false, 00:21:31.195 "write_zeroes": true, 00:21:31.195 "zcopy": false, 00:21:31.195 "get_zone_info": false, 00:21:31.195 "zone_management": false, 00:21:31.195 "zone_append": false, 00:21:31.195 "compare": false, 00:21:31.195 "compare_and_write": false, 00:21:31.195 "abort": false, 00:21:31.195 "seek_hole": true, 00:21:31.195 "seek_data": true, 00:21:31.195 "copy": false, 00:21:31.195 "nvme_iov_md": false 00:21:31.195 }, 00:21:31.195 "driver_specific": { 00:21:31.195 "lvol": { 00:21:31.195 "lvol_store_uuid": "a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f", 00:21:31.195 "base_bdev": "nvme0n1", 00:21:31.195 "thin_provision": true, 00:21:31.195 "num_allocated_clusters": 0, 00:21:31.195 "snapshot": false, 00:21:31.195 "clone": false, 00:21:31.195 "esnap_clone": false 00:21:31.195 } 00:21:31.195 } 00:21:31.195 } 00:21:31.195 ]' 00:21:31.195 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:31.195 16:09:29 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:31.195 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:31.195 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:31.195 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:31.195 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:31.195 16:09:29 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:31.195 16:09:29 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:31.454 16:09:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:31.454 16:09:29 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:31.454 16:09:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ce8ef06-d44d-465e-8b84-537c4dbe9480 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:31.454 { 00:21:31.454 "name": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:31.454 "aliases": [ 00:21:31.454 "lvs/nvme0n1p0" 00:21:31.454 ], 00:21:31.454 "product_name": "Logical Volume", 00:21:31.454 "block_size": 4096, 00:21:31.454 "num_blocks": 26476544, 00:21:31.454 "uuid": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:31.454 "assigned_rate_limits": { 00:21:31.454 "rw_ios_per_sec": 0, 00:21:31.454 "rw_mbytes_per_sec": 0, 00:21:31.454 "r_mbytes_per_sec": 0, 00:21:31.454 "w_mbytes_per_sec": 0 00:21:31.454 }, 00:21:31.454 "claimed": false, 00:21:31.454 "zoned": false, 00:21:31.454 "supported_io_types": { 00:21:31.454 "read": true, 00:21:31.454 "write": true, 00:21:31.454 "unmap": true, 00:21:31.454 "flush": false, 00:21:31.454 "reset": true, 00:21:31.454 "nvme_admin": false, 00:21:31.454 "nvme_io": false, 00:21:31.454 "nvme_io_md": false, 00:21:31.454 "write_zeroes": true, 00:21:31.454 "zcopy": false, 00:21:31.454 "get_zone_info": false, 00:21:31.454 "zone_management": false, 00:21:31.454 "zone_append": false, 00:21:31.454 "compare": false, 00:21:31.454 "compare_and_write": false, 00:21:31.454 "abort": false, 00:21:31.454 "seek_hole": true, 00:21:31.454 "seek_data": true, 00:21:31.454 "copy": false, 00:21:31.454 "nvme_iov_md": false 00:21:31.454 }, 00:21:31.454 "driver_specific": { 00:21:31.454 "lvol": { 00:21:31.454 "lvol_store_uuid": "a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f", 00:21:31.454 "base_bdev": "nvme0n1", 00:21:31.454 "thin_provision": true, 00:21:31.454 "num_allocated_clusters": 0, 00:21:31.454 "snapshot": false, 00:21:31.454 "clone": false, 00:21:31.454 "esnap_clone": false 00:21:31.454 } 00:21:31.454 } 00:21:31.454 } 00:21:31.454 ]' 00:21:31.454 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:31.715 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:31.715 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:31.715 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:31.715 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:31.715 16:09:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:31.715 16:09:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:31.715 16:09:29 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5ce8ef06-d44d-465e-8b84-537c4dbe9480 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:31.974 [2024-11-20 16:09:30.015122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.974 [2024-11-20 16:09:30.015167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:31.974 [2024-11-20 16:09:30.015184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:31.974 [2024-11-20 16:09:30.015193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.974 [2024-11-20 16:09:30.018222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.974 [2024-11-20 16:09:30.018261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.974 [2024-11-20 16:09:30.018274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:21:31.974 [2024-11-20 16:09:30.018282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.974 [2024-11-20 16:09:30.018496] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:31.974 [2024-11-20 16:09:30.019202] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:31.974 [2024-11-20 16:09:30.019225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.974 [2024-11-20 16:09:30.019233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.975 [2024-11-20 16:09:30.019243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:21:31.975 [2024-11-20 16:09:30.019251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.019320] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:21:31.975 [2024-11-20 16:09:30.020326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.020355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:31.975 [2024-11-20 16:09:30.020365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:31.975 [2024-11-20 16:09:30.020374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.025324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.025431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.975 [2024-11-20 16:09:30.025494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.895 ms 00:21:31.975 [2024-11-20 16:09:30.025559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.025694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.025738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.975 [2024-11-20 16:09:30.025809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.067 ms 00:21:31.975 [2024-11-20 16:09:30.025911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.025966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.026009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:31.975 [2024-11-20 16:09:30.026062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:31.975 [2024-11-20 16:09:30.026093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.026134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:31.975 [2024-11-20 16:09:30.029633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.029745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.975 [2024-11-20 16:09:30.029810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.502 ms 00:21:31.975 [2024-11-20 16:09:30.029907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.029996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.030117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:31.975 [2024-11-20 16:09:30.030147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:31.975 [2024-11-20 16:09:30.030178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.030215] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:31.975 [2024-11-20 16:09:30.030364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:31.975 [2024-11-20 16:09:30.030477] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:31.975 [2024-11-20 16:09:30.030513] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:31.975 [2024-11-20 16:09:30.030550] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:31.975 [2024-11-20 16:09:30.030585] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:31.975 [2024-11-20 16:09:30.030620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:31.975 [2024-11-20 16:09:30.030686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:31.975 [2024-11-20 16:09:30.030714] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:31.975 [2024-11-20 16:09:30.030751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:31.975 [2024-11-20 16:09:30.030774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 [2024-11-20 16:09:30.030793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:31.975 [2024-11-20 16:09:30.030819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:21:31.975 [2024-11-20 16:09:30.030908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.031020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.975 
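Condensed from the trace, the FTL bring-up is a six-step RPC sequence. The addresses, bdev names, sizes and flags below are the ones this run uses; capturing the returned UUIDs into lvs/lvol shell variables is a sketch rather than the literal test code:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # 103424 MiB thin-provisioned volume
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache device -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache split: nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
       --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10        # ftl0 on top, startup traced below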
[2024-11-20 16:09:30.031045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:31.975 [2024-11-20 16:09:30.031103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:31.975 [2024-11-20 16:09:30.031129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.975 [2024-11-20 16:09:30.031260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:31.975 [2024-11-20 16:09:30.031295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:31.975 [2024-11-20 16:09:30.031318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:31.975 [2024-11-20 16:09:30.031390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:31.975 [2024-11-20 16:09:30.031414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:31.975 [2024-11-20 16:09:30.031428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:31.975 [2024-11-20 16:09:30.031435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:31.975 [2024-11-20 16:09:30.031443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:31.975 [2024-11-20 16:09:30.031450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:31.975 [2024-11-20 16:09:30.031458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:31.975 [2024-11-20 16:09:30.031464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:31.975 [2024-11-20 16:09:30.031480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:31.975 [2024-11-20 16:09:30.031504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:31.975 [2024-11-20 16:09:30.031525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:31.975 [2024-11-20 16:09:30.031547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:31.975 [2024-11-20 16:09:30.031567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:31.975 [2024-11-20 16:09:30.031581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:31.975 [2024-11-20 16:09:30.031590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:31.975 [2024-11-20 16:09:30.031605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:31.975 [2024-11-20 16:09:30.031611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:31.975 [2024-11-20 16:09:30.031618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:31.975 [2024-11-20 16:09:30.031625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:31.975 [2024-11-20 16:09:30.031633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:31.975 [2024-11-20 16:09:30.031639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:31.975 [2024-11-20 16:09:30.031654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:31.975 [2024-11-20 16:09:30.031661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:31.975 [2024-11-20 16:09:30.031667] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:31.975 [2024-11-20 16:09:30.031676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:31.975 [2024-11-20 16:09:30.031683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:31.976 [2024-11-20 16:09:30.031693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:31.976 [2024-11-20 16:09:30.031700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:31.976 [2024-11-20 16:09:30.031709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:31.976 [2024-11-20 16:09:30.031716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:31.976 [2024-11-20 16:09:30.031734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:31.976 [2024-11-20 16:09:30.031741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:31.976 [2024-11-20 16:09:30.031749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:31.976 [2024-11-20 16:09:30.031758] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:31.976 [2024-11-20 16:09:30.031769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:31.976 [2024-11-20 16:09:30.031788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:31.976 [2024-11-20 16:09:30.031796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:31.976 [2024-11-20 16:09:30.031804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:31.976 [2024-11-20 16:09:30.031811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:31.976 [2024-11-20 16:09:30.031825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:31.976 [2024-11-20 16:09:30.031832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:31.976 [2024-11-20 16:09:30.031841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:31.976 [2024-11-20 16:09:30.031848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:31.976 [2024-11-20 16:09:30.031859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:31.976 [2024-11-20 16:09:30.031909] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:31.976 [2024-11-20 16:09:30.031921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:31.976 [2024-11-20 16:09:30.031938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:31.976 [2024-11-20 16:09:30.031945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:31.976 [2024-11-20 16:09:30.031953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:31.976 [2024-11-20 16:09:30.031961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.976 [2024-11-20 16:09:30.031970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:31.976 [2024-11-20 16:09:30.031978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:21:31.976 [2024-11-20 16:09:30.031986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.976 [2024-11-20 16:09:30.032048] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:31.976 [2024-11-20 16:09:30.032061] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:34.506 [2024-11-20 16:09:32.143930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.144135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:34.506 [2024-11-20 16:09:32.144156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2111.871 ms 00:21:34.506 [2024-11-20 16:09:32.144167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.169266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.169312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.506 [2024-11-20 16:09:32.169325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.868 ms 00:21:34.506 [2024-11-20 16:09:32.169335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.169469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.169486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.506 [2024-11-20 16:09:32.169495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:34.506 [2024-11-20 16:09:32.169506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.210962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.211010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.506 [2024-11-20 16:09:32.211023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.411 ms 00:21:34.506 [2024-11-20 16:09:32.211033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.211123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.211136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.506 [2024-11-20 16:09:32.211145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:34.506 [2024-11-20 16:09:32.211154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.211456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.211481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.506 [2024-11-20 16:09:32.211490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:34.506 [2024-11-20 16:09:32.211499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.211618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.211631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.506 [2024-11-20 16:09:32.211640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:34.506 [2024-11-20 16:09:32.211651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.225678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.225708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:34.506 [2024-11-20 16:09:32.225718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.992 ms 00:21:34.506 [2024-11-20 16:09:32.225738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.237199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:34.506 [2024-11-20 16:09:32.250827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.250958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:34.506 [2024-11-20 16:09:32.251017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:21:34.506 [2024-11-20 16:09:32.251044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.314281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.314484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:34.506 [2024-11-20 16:09:32.314555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.135 ms 00:21:34.506 [2024-11-20 16:09:32.314584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.314809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.314844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:34.506 [2024-11-20 16:09:32.314924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:21:34.506 [2024-11-20 16:09:32.314950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.337575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.337703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:34.506 [2024-11-20 16:09:32.337777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.584 ms 00:21:34.506 [2024-11-20 16:09:32.337806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.360127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.360232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:34.506 [2024-11-20 16:09:32.360302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.256 ms 00:21:34.506 [2024-11-20 16:09:32.360323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.506 [2024-11-20 16:09:32.360922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.506 [2024-11-20 16:09:32.361000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:34.507 [2024-11-20 16:09:32.361054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:21:34.507 [2024-11-20 16:09:32.361065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.507 [2024-11-20 16:09:32.426314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.507 [2024-11-20 16:09:32.426369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:34.507 [2024-11-20 16:09:32.426386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.218 ms 00:21:34.507 [2024-11-20 16:09:32.426394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
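Each management step above is emitted by mngt/ftl_mngt.c:trace_step as an Action/name/duration/status group. A throwaway way to rank the slowest startup steps from a captured log; ftl.log is a hypothetical filename and the awk pairing is a sketch, not part of the test suite:

  # pair each "name:" line with the "duration:" line that follows it, costliest first
  awk '/trace_step.*name:/     { n = $0; sub(/.*name: /, "", n) }
       /trace_step.*duration:/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                                 printf "%10.3f ms  %s\n", d, n }' ftl.log | sort -rn | head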
00:21:34.507 [2024-11-20 16:09:32.450337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.507 [2024-11-20 16:09:32.450371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:34.507 [2024-11-20 16:09:32.450384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.857 ms 00:21:34.507 [2024-11-20 16:09:32.450392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.507 [2024-11-20 16:09:32.473082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.507 [2024-11-20 16:09:32.473115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:34.507 [2024-11-20 16:09:32.473127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:21:34.507 [2024-11-20 16:09:32.473135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.507 [2024-11-20 16:09:32.496138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.507 [2024-11-20 16:09:32.496273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:34.507 [2024-11-20 16:09:32.496293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.938 ms 00:21:34.507 [2024-11-20 16:09:32.496310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.507 [2024-11-20 16:09:32.496367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.507 [2024-11-20 16:09:32.496383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:34.507 [2024-11-20 16:09:32.496396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.507 [2024-11-20 16:09:32.496404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.507 [2024-11-20 16:09:32.496474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.507 [2024-11-20 16:09:32.496486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:34.507 [2024-11-20 16:09:32.496497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:34.507 [2024-11-20 16:09:32.496504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.507 [2024-11-20 16:09:32.497333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.507 { 00:21:34.507 "name": "ftl0", 00:21:34.507 "uuid": "bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7" 00:21:34.507 } 00:21:34.507 [2024-11-20 16:09:32.500402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2481.915 ms, result 0 00:21:34.507 [2024-11-20 16:09:32.501175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.507 16:09:32 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:34.507 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:34.507 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:34.507 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:34.507 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:34.507 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:34.507 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:34.507 16:09:32 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:34.765 [ 00:21:34.765 { 00:21:34.765 "name": "ftl0", 00:21:34.765 "aliases": [ 00:21:34.765 "bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7" 00:21:34.765 ], 00:21:34.765 "product_name": "FTL disk", 00:21:34.765 "block_size": 4096, 00:21:34.765 "num_blocks": 23592960, 00:21:34.765 "uuid": "bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7", 00:21:34.765 "assigned_rate_limits": { 00:21:34.765 "rw_ios_per_sec": 0, 00:21:34.765 "rw_mbytes_per_sec": 0, 00:21:34.765 "r_mbytes_per_sec": 0, 00:21:34.765 "w_mbytes_per_sec": 0 00:21:34.765 }, 00:21:34.765 "claimed": false, 00:21:34.765 "zoned": false, 00:21:34.765 "supported_io_types": { 00:21:34.765 "read": true, 00:21:34.765 "write": true, 00:21:34.765 "unmap": true, 00:21:34.765 "flush": true, 00:21:34.765 "reset": false, 00:21:34.765 "nvme_admin": false, 00:21:34.765 "nvme_io": false, 00:21:34.765 "nvme_io_md": false, 00:21:34.765 "write_zeroes": true, 00:21:34.765 "zcopy": false, 00:21:34.765 "get_zone_info": false, 00:21:34.765 "zone_management": false, 00:21:34.765 "zone_append": false, 00:21:34.765 "compare": false, 00:21:34.765 "compare_and_write": false, 00:21:34.765 "abort": false, 00:21:34.765 "seek_hole": false, 00:21:34.765 "seek_data": false, 00:21:34.765 "copy": false, 00:21:34.765 "nvme_iov_md": false 00:21:34.765 }, 00:21:34.765 "driver_specific": { 00:21:34.765 "ftl": { 00:21:34.765 "base_bdev": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 00:21:34.765 "cache": "nvc0n1p0" 00:21:34.765 } 00:21:34.765 } 00:21:34.765 } 00:21:34.765 ] 00:21:34.765 16:09:32 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:34.765 16:09:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:34.765 16:09:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:35.022 16:09:33 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:35.022 16:09:33 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:35.279 16:09:33 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:35.279 { 00:21:35.279 "name": "ftl0", 00:21:35.279 "aliases": [ 00:21:35.279 "bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7" 00:21:35.279 ], 00:21:35.279 "product_name": "FTL disk", 00:21:35.279 "block_size": 4096, 00:21:35.279 "num_blocks": 23592960, 00:21:35.279 "uuid": "bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7", 00:21:35.279 "assigned_rate_limits": { 00:21:35.279 "rw_ios_per_sec": 0, 00:21:35.279 "rw_mbytes_per_sec": 0, 00:21:35.279 "r_mbytes_per_sec": 0, 00:21:35.279 "w_mbytes_per_sec": 0 00:21:35.279 }, 00:21:35.279 "claimed": false, 00:21:35.279 "zoned": false, 00:21:35.279 "supported_io_types": { 00:21:35.279 "read": true, 00:21:35.279 "write": true, 00:21:35.279 "unmap": true, 00:21:35.279 "flush": true, 00:21:35.279 "reset": false, 00:21:35.280 "nvme_admin": false, 00:21:35.280 "nvme_io": false, 00:21:35.280 "nvme_io_md": false, 00:21:35.280 "write_zeroes": true, 00:21:35.280 "zcopy": false, 00:21:35.280 "get_zone_info": false, 00:21:35.280 "zone_management": false, 00:21:35.280 "zone_append": false, 00:21:35.280 "compare": false, 00:21:35.280 "compare_and_write": false, 00:21:35.280 "abort": false, 00:21:35.280 "seek_hole": false, 00:21:35.280 "seek_data": false, 00:21:35.280 "copy": false, 00:21:35.280 "nvme_iov_md": false 00:21:35.280 }, 00:21:35.280 "driver_specific": { 00:21:35.280 "ftl": { 00:21:35.280 "base_bdev": "5ce8ef06-d44d-465e-8b84-537c4dbe9480", 
00:21:35.280 "cache": "nvc0n1p0" 00:21:35.280 } 00:21:35.280 } 00:21:35.280 } 00:21:35.280 ]' 00:21:35.280 16:09:33 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:35.280 16:09:33 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:35.280 16:09:33 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:35.539 [2024-11-20 16:09:33.568449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.568493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:35.539 [2024-11-20 16:09:33.568508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:35.539 [2024-11-20 16:09:33.568520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.568546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:35.539 [2024-11-20 16:09:33.571145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.571283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:35.539 [2024-11-20 16:09:33.571307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:21:35.539 [2024-11-20 16:09:33.571315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.571837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.571852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:35.539 [2024-11-20 16:09:33.571863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:21:35.539 [2024-11-20 16:09:33.571870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.575592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.575662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:35.539 [2024-11-20 16:09:33.575716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:21:35.539 [2024-11-20 16:09:33.575776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.582815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.582903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:35.539 [2024-11-20 16:09:33.582978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.974 ms 00:21:35.539 [2024-11-20 16:09:33.583006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.605891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.606002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:35.539 [2024-11-20 16:09:33.606089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.764 ms 00:21:35.539 [2024-11-20 16:09:33.606115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.620728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.620836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:35.539 [2024-11-20 16:09:33.620890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.534 ms 00:21:35.539 [2024-11-20 16:09:33.620915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.621113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.621306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:35.539 [2024-11-20 16:09:33.621337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:35.539 [2024-11-20 16:09:33.621360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.644381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.644490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:35.539 [2024-11-20 16:09:33.644545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:21:35.539 [2024-11-20 16:09:33.644571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.666373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.666543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:35.539 [2024-11-20 16:09:33.666604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.726 ms 00:21:35.539 [2024-11-20 16:09:33.666630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.688559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.688667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:35.539 [2024-11-20 16:09:33.688730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.862 ms 00:21:35.539 [2024-11-20 16:09:33.688754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.710772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.539 [2024-11-20 16:09:33.710895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:35.539 [2024-11-20 16:09:33.710972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.901 ms 00:21:35.539 [2024-11-20 16:09:33.710999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.539 [2024-11-20 16:09:33.711073] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:35.539 [2024-11-20 16:09:33.711116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711430] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.711971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 
[2024-11-20 16:09:33.712717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.712962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:35.539 [2024-11-20 16:09:33.713403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.713951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:35.540 [2024-11-20 16:09:33.713987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:35.540 [2024-11-20 16:09:33.714490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:35.540 [2024-11-20 16:09:33.714502] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:21:35.540 [2024-11-20 16:09:33.714509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:35.540 [2024-11-20 16:09:33.714518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:35.540 [2024-11-20 16:09:33.714525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:35.540 [2024-11-20 16:09:33.714537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:35.540 [2024-11-20 16:09:33.714544] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:35.540 [2024-11-20 16:09:33.714553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:35.540 [2024-11-20 16:09:33.714561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:35.540 [2024-11-20 16:09:33.714568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:35.540 [2024-11-20 16:09:33.714575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:35.540 [2024-11-20 16:09:33.714584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.540 [2024-11-20 16:09:33.714591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:35.540 [2024-11-20 16:09:33.714601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.513 ms 00:21:35.540 [2024-11-20 16:09:33.714608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.540 [2024-11-20 16:09:33.727029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.540 [2024-11-20 16:09:33.727061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:35.540 [2024-11-20 16:09:33.727077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.378 ms 00:21:35.540 [2024-11-20 16:09:33.727085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.540 [2024-11-20 16:09:33.727452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.540 [2024-11-20 16:09:33.727466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:35.540 [2024-11-20 16:09:33.727477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:21:35.540 [2024-11-20 16:09:33.727484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.540 [2024-11-20 16:09:33.770549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.540 [2024-11-20 16:09:33.770584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.540 [2024-11-20 16:09:33.770596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.540 [2024-11-20 16:09:33.770604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.540 [2024-11-20 16:09:33.770720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.540 [2024-11-20 16:09:33.770748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.540 [2024-11-20 16:09:33.770759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.540 [2024-11-20 16:09:33.770767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.540 [2024-11-20 16:09:33.770820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.540 [2024-11-20 16:09:33.770833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.540 [2024-11-20 16:09:33.770847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.540 [2024-11-20 16:09:33.770854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.540 [2024-11-20 16:09:33.770877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.540 [2024-11-20 16:09:33.770886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.540 [2024-11-20 16:09:33.770894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.540 [2024-11-20 16:09:33.770902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.850016] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.850054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.798 [2024-11-20 16:09:33.850066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.850074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.911400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.911437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.798 [2024-11-20 16:09:33.911450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.911459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.911547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.911556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.798 [2024-11-20 16:09:33.911578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.911588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.911631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.911639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.798 [2024-11-20 16:09:33.911648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.911656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.911777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.911791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.798 [2024-11-20 16:09:33.911801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.911810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.911851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.911864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:35.798 [2024-11-20 16:09:33.911874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.911881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.911925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.798 [2024-11-20 16:09:33.911937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.798 [2024-11-20 16:09:33.911948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.798 [2024-11-20 16:09:33.911955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.798 [2024-11-20 16:09:33.912005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.799 [2024-11-20 16:09:33.912018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.799 [2024-11-20 16:09:33.912027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.799 [2024-11-20 16:09:33.912035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:35.799 [2024-11-20 16:09:33.912198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.734 ms, result 0 00:21:35.799 true 00:21:35.799 16:09:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76439 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76439 ']' 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76439 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76439 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76439' 00:21:35.799 killing process with pid 76439 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76439 00:21:35.799 16:09:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76439 00:21:42.351 16:09:40 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:42.608 65536+0 records in 00:21:42.608 65536+0 records out 00:21:42.608 268435456 bytes (268 MB, 256 MiB) copied, 0.811437 s, 331 MB/s 00:21:42.608 16:09:40 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:42.867 [2024-11-20 16:09:40.882648] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
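The dd figures just above are internally consistent: 65536 blocks of 4 KiB come to 268435456 bytes (256 MiB, or 268 MB decimal), and at the reported 0.811437 s that is roughly 331 MB/s, matching the printed rate. A minimal sketch that regenerates an equivalent pattern file and spells out the arithmetic; the /tmp output path is illustrative, the harness itself writes to test/ftl/random_pattern:

#!/usr/bin/env bash
# Regenerate a 256 MiB random test pattern like the one fed to spdk_dd above.
bs=4096                        # 4K blocks, as in 'dd if=/dev/urandom bs=4K'
count=65536
total=$((bs * count))          # 268435456 bytes = 256 MiB (268 MB decimal)
echo "expected size: ${total} bytes"
# Illustrative destination; trim.sh uses test/ftl/random_pattern instead.
dd if=/dev/urandom of=/tmp/random_pattern bs=${bs} count=${count}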
00:21:42.867 [2024-11-20 16:09:40.882953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76610 ] 00:21:42.867 [2024-11-20 16:09:41.042511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.124 [2024-11-20 16:09:41.140011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.384 [2024-11-20 16:09:41.396066] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.384 [2024-11-20 16:09:41.396127] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.384 [2024-11-20 16:09:41.550047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.550101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:43.384 [2024-11-20 16:09:41.550115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:43.384 [2024-11-20 16:09:41.550123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.552804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.552882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.384 [2024-11-20 16:09:41.552895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:21:43.384 [2024-11-20 16:09:41.552902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.553079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:43.384 [2024-11-20 16:09:41.553771] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:43.384 [2024-11-20 16:09:41.553797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.553805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.384 [2024-11-20 16:09:41.553814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:21:43.384 [2024-11-20 16:09:41.553822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.554957] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:43.384 [2024-11-20 16:09:41.566954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.566989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:43.384 [2024-11-20 16:09:41.567001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.998 ms 00:21:43.384 [2024-11-20 16:09:41.567008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.567104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.567115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:43.384 [2024-11-20 16:09:41.567124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:43.384 [2024-11-20 16:09:41.567132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.571823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
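The startup sequence underway here (open base bdev, attach nvc0n1p0 as the write buffer cache, load and validate the superblock) is the load-side counterpart of the bdev_ftl_unload call traced at ftl/trim.sh@61 earlier. A hedged sketch of that RPC pair, assuming the stock bdev_ftl_create options; ftl0 and nvc0n1p0 are taken from this log, while the base bdev name is illustrative:

#!/usr/bin/env bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create/load an FTL bdev: -d names the base data bdev (illustrative here),
# -c the NV-cache bdev the log reports as the write buffer cache.
${RPC} bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0
# ... run I/O against ftl0 ...
# Tear it down again; this is the call traced at ftl/trim.sh@61 above.
${RPC} bdev_ftl_unload -b ftl0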
00:21:43.384 [2024-11-20 16:09:41.571851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.384 [2024-11-20 16:09:41.571861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:21:43.384 [2024-11-20 16:09:41.571868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.571958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.571967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.384 [2024-11-20 16:09:41.571975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:43.384 [2024-11-20 16:09:41.571982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.572007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.572017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.384 [2024-11-20 16:09:41.572025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.384 [2024-11-20 16:09:41.572033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.572053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:43.384 [2024-11-20 16:09:41.575164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.575292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.384 [2024-11-20 16:09:41.575307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:21:43.384 [2024-11-20 16:09:41.575315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.575350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.575358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.384 [2024-11-20 16:09:41.575366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:43.384 [2024-11-20 16:09:41.575373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.575390] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.384 [2024-11-20 16:09:41.575412] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.384 [2024-11-20 16:09:41.575445] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.384 [2024-11-20 16:09:41.575460] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:43.384 [2024-11-20 16:09:41.575561] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.384 [2024-11-20 16:09:41.575571] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.384 [2024-11-20 16:09:41.575582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:43.384 [2024-11-20 16:09:41.575591] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.384 [2024-11-20 16:09:41.575603] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.384 [2024-11-20 16:09:41.575611] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:43.384 [2024-11-20 16:09:41.575618] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.384 [2024-11-20 16:09:41.575625] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.384 [2024-11-20 16:09:41.575632] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.384 [2024-11-20 16:09:41.575639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.575647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.384 [2024-11-20 16:09:41.575655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:21:43.384 [2024-11-20 16:09:41.575662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.575782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.384 [2024-11-20 16:09:41.575795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.384 [2024-11-20 16:09:41.575803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:43.384 [2024-11-20 16:09:41.575810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.384 [2024-11-20 16:09:41.575912] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.384 [2024-11-20 16:09:41.575922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.384 [2024-11-20 16:09:41.575929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.384 [2024-11-20 16:09:41.575937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.384 [2024-11-20 16:09:41.575944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.384 [2024-11-20 16:09:41.575951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:43.384 [2024-11-20 16:09:41.575958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:43.384 [2024-11-20 16:09:41.575964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:43.384 [2024-11-20 16:09:41.575972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:43.384 [2024-11-20 16:09:41.575979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.384 [2024-11-20 16:09:41.575985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.384 [2024-11-20 16:09:41.575992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:43.384 [2024-11-20 16:09:41.575998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.384 [2024-11-20 16:09:41.576011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.384 [2024-11-20 16:09:41.576018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:43.384 [2024-11-20 16:09:41.576024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.384 [2024-11-20 16:09:41.576031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.384 [2024-11-20 16:09:41.576037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:43.384 [2024-11-20 16:09:41.576043] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.384 [2024-11-20 16:09:41.576050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.384 [2024-11-20 16:09:41.576057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:43.384 [2024-11-20 16:09:41.576065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.384 [2024-11-20 16:09:41.576071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.384 [2024-11-20 16:09:41.576078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:43.384 [2024-11-20 16:09:41.576084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.385 [2024-11-20 16:09:41.576090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.385 [2024-11-20 16:09:41.576097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:43.385 [2024-11-20 16:09:41.576103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.385 [2024-11-20 16:09:41.576110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.385 [2024-11-20 16:09:41.576116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:43.385 [2024-11-20 16:09:41.576122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.385 [2024-11-20 16:09:41.576128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.385 [2024-11-20 16:09:41.576135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:43.385 [2024-11-20 16:09:41.576141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.385 [2024-11-20 16:09:41.576148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.385 [2024-11-20 16:09:41.576154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:43.385 [2024-11-20 16:09:41.576160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.385 [2024-11-20 16:09:41.576167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.385 [2024-11-20 16:09:41.576174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:43.385 [2024-11-20 16:09:41.576180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.385 [2024-11-20 16:09:41.576186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:43.385 [2024-11-20 16:09:41.576193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:43.385 [2024-11-20 16:09:41.576199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.385 [2024-11-20 16:09:41.576206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.385 [2024-11-20 16:09:41.576213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.385 [2024-11-20 16:09:41.576221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.385 [2024-11-20 16:09:41.576229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.385 [2024-11-20 16:09:41.576237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.385 [2024-11-20 16:09:41.576243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.385 [2024-11-20 16:09:41.576249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.385 
[2024-11-20 16:09:41.576256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.385 [2024-11-20 16:09:41.576262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.385 [2024-11-20 16:09:41.576268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.385 [2024-11-20 16:09:41.576277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.385 [2024-11-20 16:09:41.576285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:43.385 [2024-11-20 16:09:41.576301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:43.385 [2024-11-20 16:09:41.576308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:43.385 [2024-11-20 16:09:41.576315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:43.385 [2024-11-20 16:09:41.576322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:43.385 [2024-11-20 16:09:41.576329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:43.385 [2024-11-20 16:09:41.576336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:43.385 [2024-11-20 16:09:41.576342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:43.385 [2024-11-20 16:09:41.576349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:43.385 [2024-11-20 16:09:41.576356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:43.385 [2024-11-20 16:09:41.576391] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.385 [2024-11-20 16:09:41.576399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.385 [2024-11-20 16:09:41.576413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.385 [2024-11-20 16:09:41.576420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.385 [2024-11-20 16:09:41.576427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.385 [2024-11-20 16:09:41.576435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.385 [2024-11-20 16:09:41.576442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.385 [2024-11-20 16:09:41.576452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:21:43.385 [2024-11-20 16:09:41.576458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.385 [2024-11-20 16:09:41.601719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.385 [2024-11-20 16:09:41.601763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.385 [2024-11-20 16:09:41.601774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.195 ms 00:21:43.385 [2024-11-20 16:09:41.601781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.385 [2024-11-20 16:09:41.601902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.385 [2024-11-20 16:09:41.601916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.385 [2024-11-20 16:09:41.601924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:43.385 [2024-11-20 16:09:41.601931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.643 [2024-11-20 16:09:41.641933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.643 [2024-11-20 16:09:41.641981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.643 [2024-11-20 16:09:41.641994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.979 ms 00:21:43.643 [2024-11-20 16:09:41.642005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.643 [2024-11-20 16:09:41.642115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.643 [2024-11-20 16:09:41.642127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.643 [2024-11-20 16:09:41.642136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:43.643 [2024-11-20 16:09:41.642144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.643 [2024-11-20 16:09:41.642453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.643 [2024-11-20 16:09:41.642468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.644 [2024-11-20 16:09:41.642477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:21:43.644 [2024-11-20 16:09:41.642490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.642614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.642623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.644 [2024-11-20 16:09:41.642631] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:43.644 [2024-11-20 16:09:41.642637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.655699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.655879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.644 [2024-11-20 16:09:41.655895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.041 ms 00:21:43.644 [2024-11-20 16:09:41.655903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.667974] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:43.644 [2024-11-20 16:09:41.668008] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.644 [2024-11-20 16:09:41.668019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.668028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.644 [2024-11-20 16:09:41.668036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.013 ms 00:21:43.644 [2024-11-20 16:09:41.668044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.692006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.692038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.644 [2024-11-20 16:09:41.692056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.890 ms 00:21:43.644 [2024-11-20 16:09:41.692064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.703633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.703782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:43.644 [2024-11-20 16:09:41.703797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.501 ms 00:21:43.644 [2024-11-20 16:09:41.703805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.714904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.715012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:43.644 [2024-11-20 16:09:41.715027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.039 ms 00:21:43.644 [2024-11-20 16:09:41.715036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.715661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.715676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.644 [2024-11-20 16:09:41.715685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:21:43.644 [2024-11-20 16:09:41.715693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.768801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.768855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.644 [2024-11-20 16:09:41.768868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.083 ms 00:21:43.644 [2024-11-20 16:09:41.768875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.779256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:43.644 [2024-11-20 16:09:41.792782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.792819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.644 [2024-11-20 16:09:41.792831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.799 ms 00:21:43.644 [2024-11-20 16:09:41.792839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.792924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.792937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:43.644 [2024-11-20 16:09:41.792946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:43.644 [2024-11-20 16:09:41.792953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.792999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.793007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:43.644 [2024-11-20 16:09:41.793015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:43.644 [2024-11-20 16:09:41.793022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.793051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.793059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:43.644 [2024-11-20 16:09:41.793069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:43.644 [2024-11-20 16:09:41.793075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.793105] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:43.644 [2024-11-20 16:09:41.793115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.793123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:43.644 [2024-11-20 16:09:41.793130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:43.644 [2024-11-20 16:09:41.793137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.815739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.815788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:43.644 [2024-11-20 16:09:41.815799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.583 ms 00:21:43.644 [2024-11-20 16:09:41.815806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.644 [2024-11-20 16:09:41.815890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.644 [2024-11-20 16:09:41.815901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:43.644 [2024-11-20 16:09:41.815909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:43.644 [2024-11-20 16:09:41.815916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
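The sizing figures scattered through this run agree with one another: the num_blocks of 23592960 read back with jq at ftl/trim.sh@60 equals the "L2P entries: 23592960" line in the layout dump above, and with the reported 4-byte L2P address size that is exactly the 90.00 MiB l2p region; at 4 KiB per block the user-visible capacity works out to 90 GiB. A quick cross-check in shell arithmetic, where the 4096-byte block size is an assumption based on the 4K blocks the trim test writes:

#!/usr/bin/env bash
nb=23592960     # num_blocks from the jq query at ftl/trim.sh@60 above
l2p_entry=4     # bytes per entry ("L2P address size: 4" in the layout dump)
blk=4096        # assumed FTL block size (4 KiB)
echo "l2p region   : $((nb * l2p_entry / 1024**2)) MiB"   # -> 90 MiB
echo "user capacity: $((nb * blk / 1024**3)) GiB"         # -> 90 GiB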
00:21:43.644 [2024-11-20 16:09:41.816663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:43.644 [2024-11-20 16:09:41.819515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.349 ms, result 0 00:21:43.644 [2024-11-20 16:09:41.820258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:43.644 [2024-11-20 16:09:41.833057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.153  [2024-11-20T16:09:43.969Z] Copying: 43/256 [MB] (43 MBps) [2024-11-20T16:09:44.902Z] Copying: 88/256 [MB] (44 MBps) [2024-11-20T16:09:46.276Z] Copying: 130/256 [MB] (42 MBps) [2024-11-20T16:09:46.841Z] Copying: 172/256 [MB] (41 MBps) [2024-11-20T16:09:47.772Z] Copying: 215/256 [MB] (43 MBps) [2024-11-20T16:09:47.772Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-20 16:09:47.747991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:49.522 [2024-11-20 16:09:47.757345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.522 [2024-11-20 16:09:47.757527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:49.522 [2024-11-20 16:09:47.757547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.522 [2024-11-20 16:09:47.757556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.522 [2024-11-20 16:09:47.757586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:49.522 [2024-11-20 16:09:47.760192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.522 [2024-11-20 16:09:47.760222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:49.522 [2024-11-20 16:09:47.760233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:21:49.522 [2024-11-20 16:09:47.760241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.522 [2024-11-20 16:09:47.761833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.522 [2024-11-20 16:09:47.761864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:49.522 [2024-11-20 16:09:47.761873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:21:49.522 [2024-11-20 16:09:47.761881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.522 [2024-11-20 16:09:47.768424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.522 [2024-11-20 16:09:47.768546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:49.522 [2024-11-20 16:09:47.768566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.527 ms 00:21:49.522 [2024-11-20 16:09:47.768574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.775701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.775828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:49.782 [2024-11-20 16:09:47.775843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.081 ms 00:21:49.782 [2024-11-20 16:09:47.775851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 
16:09:47.799254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.799378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:49.782 [2024-11-20 16:09:47.799394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.342 ms 00:21:49.782 [2024-11-20 16:09:47.799402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.813431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.813464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:49.782 [2024-11-20 16:09:47.813480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:21:49.782 [2024-11-20 16:09:47.813490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.813624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.813634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:49.782 [2024-11-20 16:09:47.813642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:49.782 [2024-11-20 16:09:47.813649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.837083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.837216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:49.782 [2024-11-20 16:09:47.837232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.417 ms 00:21:49.782 [2024-11-20 16:09:47.837240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.860519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.860651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:49.782 [2024-11-20 16:09:47.860667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.243 ms 00:21:49.782 [2024-11-20 16:09:47.860676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.883907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.883939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:49.782 [2024-11-20 16:09:47.883949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.197 ms 00:21:49.782 [2024-11-20 16:09:47.883956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.906351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.782 [2024-11-20 16:09:47.906386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:49.782 [2024-11-20 16:09:47.906397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.316 ms 00:21:49.782 [2024-11-20 16:09:47.906404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.782 [2024-11-20 16:09:47.906439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:49.782 [2024-11-20 16:09:47.906459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:21:49.782 [2024-11-20 16:09:47.906477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:21:49.782 [2024-11-20 16:09:47.906661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:49.782 [2024-11-20 16:09:47.906684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.906995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907067] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:49.783 [2024-11-20 16:09:47.907258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:49.783 [2024-11-20 16:09:47.907270] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:21:49.783 [2024-11-20 16:09:47.907278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:49.783 [2024-11-20 16:09:47.907285] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:49.783 [2024-11-20 16:09:47.907292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:49.783 [2024-11-20 16:09:47.907299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:49.783 [2024-11-20 16:09:47.907306] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:49.783 [2024-11-20 16:09:47.907313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:49.783 [2024-11-20 16:09:47.907320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:49.783 [2024-11-20 16:09:47.907326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:49.783 [2024-11-20 16:09:47.907332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:49.783 [2024-11-20 16:09:47.907339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.783 [2024-11-20 16:09:47.907346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:49.783 [2024-11-20 16:09:47.907358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:21:49.783 [2024-11-20 16:09:47.907366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.783 [2024-11-20 16:09:47.919650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.783 [2024-11-20 16:09:47.919682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:49.783 [2024-11-20 16:09:47.919692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:21:49.783 [2024-11-20 16:09:47.919700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.783 [2024-11-20 16:09:47.920072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.784 [2024-11-20 16:09:47.920092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:49.784 [2024-11-20 16:09:47.920100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:21:49.784 [2024-11-20 16:09:47.920107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.784 [2024-11-20 16:09:47.955411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.784 [2024-11-20 16:09:47.955455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.784 [2024-11-20 16:09:47.955466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.784 [2024-11-20 16:09:47.955473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.784 [2024-11-20 16:09:47.955553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.784 [2024-11-20 16:09:47.955566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.784 [2024-11-20 16:09:47.955574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.784 [2024-11-20 16:09:47.955581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.784 [2024-11-20 16:09:47.955629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.784 [2024-11-20 16:09:47.955639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:21:49.784 [2024-11-20 16:09:47.955647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.784 [2024-11-20 16:09:47.955654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.784 [2024-11-20 16:09:47.955670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.784 [2024-11-20 16:09:47.955678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.784 [2024-11-20 16:09:47.955687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.784 [2024-11-20 16:09:47.955694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.042 [2024-11-20 16:09:48.032387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.042 [2024-11-20 16:09:48.032433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.042 [2024-11-20 16:09:48.032444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.042 [2024-11-20 16:09:48.032452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.042 [2024-11-20 16:09:48.095181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.042 [2024-11-20 16:09:48.095229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.042 [2024-11-20 16:09:48.095240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.042 [2024-11-20 16:09:48.095248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.042 [2024-11-20 16:09:48.095297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.042 [2024-11-20 16:09:48.095307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.042 [2024-11-20 16:09:48.095314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.042 [2024-11-20 16:09:48.095321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.042 [2024-11-20 16:09:48.095349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.042 [2024-11-20 16:09:48.095356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.042 [2024-11-20 16:09:48.095364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.042 [2024-11-20 16:09:48.095374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.042 [2024-11-20 16:09:48.095458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.042 [2024-11-20 16:09:48.095468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.042 [2024-11-20 16:09:48.095476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.043 [2024-11-20 16:09:48.095483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.043 [2024-11-20 16:09:48.095514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.043 [2024-11-20 16:09:48.095522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:50.043 [2024-11-20 16:09:48.095529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.043 [2024-11-20 16:09:48.095537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.043 [2024-11-20 16:09:48.095573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.043 [2024-11-20 16:09:48.095582] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:50.043 [2024-11-20 16:09:48.095590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:50.043 [2024-11-20 16:09:48.095598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:50.043 [2024-11-20 16:09:48.095636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:50.043 [2024-11-20 16:09:48.095646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:50.043 [2024-11-20 16:09:48.095654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:50.043 [2024-11-20 16:09:48.095664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:50.043 [2024-11-20 16:09:48.095815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.462 ms, result 0
00:21:50.976
00:21:50.976
00:21:50.976 16:09:49 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76697
00:21:50.976 16:09:49 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76697
00:21:50.976 16:09:49 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:21:50.976 16:09:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76697 ']'
00:21:50.976 16:09:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:50.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:50.976 16:09:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:50.976 16:09:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:50.977 16:09:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:50.977 16:09:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:21:50.977 [2024-11-20 16:09:49.184349] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
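The xtrace above compresses ftl/trim.sh lines 71-73 together with the waitforlisten helper from autotest_common.sh: the target is started in the background, its PID is captured, and the helper polls until the RPC UNIX socket at /var/tmp/spdk.sock appears (up to max_retries=100). A minimal bash sketch of that launch-and-wait pattern; the loop body is an assumption, since the real helper's implementation is not shown in this log:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_addr=/var/tmp/spdk.sock
  "$spdk_tgt" -L ftl_init &   # trim.sh@71: start the target with FTL init logging
  svcpid=$!                   # trim.sh@72: 76697 in this run
  max_retries=100             # autotest_common.sh@840
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( max_retries-- > 0 )); do
      kill -0 "$svcpid" 2>/dev/null || exit 1   # target died during startup
      [ -S "$rpc_addr" ] && break               # socket exists: target is listening
      sleep 0.5                                 # assumed poll interval
  done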
00:21:50.977 [2024-11-20 16:09:49.184466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76697 ] 00:21:51.233 [2024-11-20 16:09:49.340953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.233 [2024-11-20 16:09:49.428287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.799 16:09:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.799 16:09:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:51.799 16:09:50 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:52.062 [2024-11-20 16:09:50.229909] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.062 [2024-11-20 16:09:50.229972] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.430 [2024-11-20 16:09:50.400249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.400296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:52.430 [2024-11-20 16:09:50.400310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:52.430 [2024-11-20 16:09:50.400318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.402978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.403010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.430 [2024-11-20 16:09:50.403021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:21:52.430 [2024-11-20 16:09:50.403029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.403148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:52.430 [2024-11-20 16:09:50.403876] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:52.430 [2024-11-20 16:09:50.403905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.403913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.430 [2024-11-20 16:09:50.403924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:21:52.430 [2024-11-20 16:09:50.403931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.405004] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:52.430 [2024-11-20 16:09:50.417342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.417377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:52.430 [2024-11-20 16:09:50.417388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.343 ms 00:21:52.430 [2024-11-20 16:09:50.417398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.417485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.417497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:52.430 [2024-11-20 16:09:50.417505] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:52.430 [2024-11-20 16:09:50.417514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.422192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.422226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.430 [2024-11-20 16:09:50.422235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.632 ms 00:21:52.430 [2024-11-20 16:09:50.422247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.422336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.422348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.430 [2024-11-20 16:09:50.422356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:52.430 [2024-11-20 16:09:50.422364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.422391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.422400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:52.430 [2024-11-20 16:09:50.422408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:52.430 [2024-11-20 16:09:50.422416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.422438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:52.430 [2024-11-20 16:09:50.425696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.425734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.430 [2024-11-20 16:09:50.425745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:21:52.430 [2024-11-20 16:09:50.425752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.425787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.430 [2024-11-20 16:09:50.425795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:52.430 [2024-11-20 16:09:50.425805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:52.430 [2024-11-20 16:09:50.425813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.430 [2024-11-20 16:09:50.425834] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:52.430 [2024-11-20 16:09:50.425850] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:52.430 [2024-11-20 16:09:50.425888] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:52.430 [2024-11-20 16:09:50.425904] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:52.430 [2024-11-20 16:09:50.426008] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:52.430 [2024-11-20 16:09:50.426022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:52.430 [2024-11-20 16:09:50.426037] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:52.430 [2024-11-20 16:09:50.426047] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:52.430 [2024-11-20 16:09:50.426057] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:52.430 [2024-11-20 16:09:50.426065] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:52.431 [2024-11-20 16:09:50.426074] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:52.431 [2024-11-20 16:09:50.426081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:52.431 [2024-11-20 16:09:50.426091] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:52.431 [2024-11-20 16:09:50.426098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.431 [2024-11-20 16:09:50.426106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:52.431 [2024-11-20 16:09:50.426114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:52.431 [2024-11-20 16:09:50.426122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.431 [2024-11-20 16:09:50.426209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.431 [2024-11-20 16:09:50.426218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:52.431 [2024-11-20 16:09:50.426226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:52.431 [2024-11-20 16:09:50.426234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.431 [2024-11-20 16:09:50.426331] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:52.431 [2024-11-20 16:09:50.426342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:52.431 [2024-11-20 16:09:50.426350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:52.431 [2024-11-20 16:09:50.426374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:52.431 [2024-11-20 16:09:50.426398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.431 [2024-11-20 16:09:50.426412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:52.431 [2024-11-20 16:09:50.426420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:52.431 [2024-11-20 16:09:50.426426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.431 [2024-11-20 16:09:50.426434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:52.431 [2024-11-20 16:09:50.426441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:52.431 [2024-11-20 16:09:50.426451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.431 
[2024-11-20 16:09:50.426457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:52.431 [2024-11-20 16:09:50.426465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:52.431 [2024-11-20 16:09:50.426493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:52.431 [2024-11-20 16:09:50.426517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:52.431 [2024-11-20 16:09:50.426537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:52.431 [2024-11-20 16:09:50.426564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:52.431 [2024-11-20 16:09:50.426585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.431 [2024-11-20 16:09:50.426600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:52.431 [2024-11-20 16:09:50.426608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:52.431 [2024-11-20 16:09:50.426615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.431 [2024-11-20 16:09:50.426622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:52.431 [2024-11-20 16:09:50.426629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:52.431 [2024-11-20 16:09:50.426638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:52.431 [2024-11-20 16:09:50.426652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:52.431 [2024-11-20 16:09:50.426659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426666] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:52.431 [2024-11-20 16:09:50.426675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:52.431 [2024-11-20 16:09:50.426683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.431 [2024-11-20 16:09:50.426700] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:52.431 [2024-11-20 16:09:50.426706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:52.431 [2024-11-20 16:09:50.426714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:52.431 [2024-11-20 16:09:50.426731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:52.431 [2024-11-20 16:09:50.426739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:52.431 [2024-11-20 16:09:50.426746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:52.431 [2024-11-20 16:09:50.426755] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:52.431 [2024-11-20 16:09:50.426764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:52.431 [2024-11-20 16:09:50.426782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:52.431 [2024-11-20 16:09:50.426792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:52.431 [2024-11-20 16:09:50.426800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:52.431 [2024-11-20 16:09:50.426811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:52.431 [2024-11-20 16:09:50.426818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:52.431 [2024-11-20 16:09:50.426826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:52.431 [2024-11-20 16:09:50.426833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:52.431 [2024-11-20 16:09:50.426841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:52.431 [2024-11-20 16:09:50.426848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:52.431 [2024-11-20 16:09:50.426887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:52.431 [2024-11-20 
16:09:50.426895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:52.431 [2024-11-20 16:09:50.426912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:52.431 [2024-11-20 16:09:50.426920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:52.431 [2024-11-20 16:09:50.426927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:52.431 [2024-11-20 16:09:50.426936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.431 [2024-11-20 16:09:50.426943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:52.431 [2024-11-20 16:09:50.426951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:21:52.431 [2024-11-20 16:09:50.426958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.431 [2024-11-20 16:09:50.452392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.431 [2024-11-20 16:09:50.452530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.431 [2024-11-20 16:09:50.452548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.360 ms 00:21:52.431 [2024-11-20 16:09:50.452558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.431 [2024-11-20 16:09:50.452676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.431 [2024-11-20 16:09:50.452686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:52.431 [2024-11-20 16:09:50.452696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:52.431 [2024-11-20 16:09:50.452703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.482789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.482917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.432 [2024-11-20 16:09:50.482935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.064 ms 00:21:52.432 [2024-11-20 16:09:50.482943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.483000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.483009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.432 [2024-11-20 16:09:50.483019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:52.432 [2024-11-20 16:09:50.483026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.483340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.483360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.432 [2024-11-20 16:09:50.483373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:21:52.432 [2024-11-20 16:09:50.483380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.483504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.483517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.432 [2024-11-20 16:09:50.483527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:52.432 [2024-11-20 16:09:50.483534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.497612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.497641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.432 [2024-11-20 16:09:50.497653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.056 ms 00:21:52.432 [2024-11-20 16:09:50.497660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.519838] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:52.432 [2024-11-20 16:09:50.519880] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:52.432 [2024-11-20 16:09:50.519896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.519906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:52.432 [2024-11-20 16:09:50.519918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.110 ms 00:21:52.432 [2024-11-20 16:09:50.519926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.545065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.545100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:52.432 [2024-11-20 16:09:50.545113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.073 ms 00:21:52.432 [2024-11-20 16:09:50.545122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.556518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.556549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:52.432 [2024-11-20 16:09:50.556563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.326 ms 00:21:52.432 [2024-11-20 16:09:50.556569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.567566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.567690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:52.432 [2024-11-20 16:09:50.567709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.933 ms 00:21:52.432 [2024-11-20 16:09:50.567716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.568334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.568354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:52.432 [2024-11-20 16:09:50.568365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:21:52.432 [2024-11-20 16:09:50.568372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 
16:09:50.621925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-11-20 16:09:50.622106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:52.432 [2024-11-20 16:09:50.622128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.528 ms 00:21:52.432 [2024-11-20 16:09:50.622136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-11-20 16:09:50.632501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:52.692 [2024-11-20 16:09:50.646202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.646247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:52.692 [2024-11-20 16:09:50.646262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.971 ms 00:21:52.692 [2024-11-20 16:09:50.646271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.692 [2024-11-20 16:09:50.646356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.646368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:52.692 [2024-11-20 16:09:50.646376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:52.692 [2024-11-20 16:09:50.646386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.692 [2024-11-20 16:09:50.646431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.646441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:52.692 [2024-11-20 16:09:50.646449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:52.692 [2024-11-20 16:09:50.646461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.692 [2024-11-20 16:09:50.646484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.646494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:52.692 [2024-11-20 16:09:50.646502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:52.692 [2024-11-20 16:09:50.646513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.692 [2024-11-20 16:09:50.646541] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:52.692 [2024-11-20 16:09:50.646555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.646562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:52.692 [2024-11-20 16:09:50.646573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:52.692 [2024-11-20 16:09:50.646580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.692 [2024-11-20 16:09:50.669824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.669858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:52.692 [2024-11-20 16:09:50.669871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.218 ms 00:21:52.692 [2024-11-20 16:09:50.669880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.692 [2024-11-20 16:09:50.669971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.692 [2024-11-20 16:09:50.669981] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:52.692 [2024-11-20 16:09:50.669991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:21:52.692 [2024-11-20 16:09:50.670001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.692 [2024-11-20 16:09:50.670872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:52.692 [2024-11-20 16:09:50.673718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.337 ms, result 0
00:21:52.693 [2024-11-20 16:09:50.674854] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:52.693 Some configs were skipped because the RPC state that can call them passed over.
00:21:52.693 16:09:50 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:21:52.950 [2024-11-20 16:09:50.941366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.950 [2024-11-20 16:09:50.941530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:52.950 [2024-11-20 16:09:50.941597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms
00:21:52.950 [2024-11-20 16:09:50.941623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.950 [2024-11-20 16:09:50.941675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.792 ms, result 0
00:21:52.950 true
00:21:52.950 16:09:50 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:21:52.950 [2024-11-20 16:09:51.141174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.950 [2024-11-20 16:09:51.141329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:52.950 [2024-11-20 16:09:51.141393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms
00:21:52.950 [2024-11-20 16:09:51.141416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.950 [2024-11-20 16:09:51.141469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.345 ms, result 0
00:21:52.950 true
00:21:52.950 16:09:51 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76697
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76697 ']'
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76697
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76697
00:21:52.950 killing process with pid 76697
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76697'
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76697
00:21:52.950 16:09:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76697
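The two bdev_ftl_unmap RPCs trim 1024 blocks at each end of the device's logical space: startup reported 23592960 L2P entries, so --lba 23591936 addresses exactly the last 1024 blocks. killprocess then stops the target, which triggers the 'FTL shutdown' management sequence that follows. A condensed bash sketch of trim.sh@78-81 and the killprocess pattern from the xtrace (variable names are assumptions; the helper's sudo branch is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # trim.sh@78: head of the device
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # trim.sh@79: tail (23592960 - 1024)
  kill -0 "$svcpid"                                    # @958: fail fast if the target is gone
  process_name=$(ps --no-headers -o comm= "$svcpid")   # @960: reactor_0 in this run
  echo "killing process with pid $svcpid"              # @972
  kill "$svcpid"                                       # @973: starts the 'FTL shutdown' sequence
  wait "$svcpid"                                       # @978: reap the target once shutdown completes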
00:21:53.885 [2024-11-20 16:09:51.859041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.859090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:53.885 [2024-11-20 16:09:51.859103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:53.885 [2024-11-20 16:09:51.859112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.859137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:53.885 [2024-11-20 16:09:51.861707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.861741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:53.885 [2024-11-20 16:09:51.861756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms
00:21:53.885 [2024-11-20 16:09:51.861765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.862064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.862074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:53.885 [2024-11-20 16:09:51.862084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms
00:21:53.885 [2024-11-20 16:09:51.862092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.866173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.866198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:53.885 [2024-11-20 16:09:51.866211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms
00:21:53.885 [2024-11-20 16:09:51.866218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.873304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.873434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:53.885 [2024-11-20 16:09:51.873452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.052 ms
00:21:53.885 [2024-11-20 16:09:51.873460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.882405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.882432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:53.885 [2024-11-20 16:09:51.882445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.888 ms
00:21:53.885 [2024-11-20 16:09:51.882458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.889528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.889557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:53.885 [2024-11-20 16:09:51.889569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.032 ms
00:21:53.885 [2024-11-20 16:09:51.889577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.885 [2024-11-20 16:09:51.889711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:53.885 [2024-11-20 16:09:51.889737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:53.885 [2024-11-20 16:09:51.889748] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:53.885 [2024-11-20 16:09:51.889755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.885 [2024-11-20 16:09:51.899567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.885 [2024-11-20 16:09:51.899593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:53.885 [2024-11-20 16:09:51.899605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.789 ms 00:21:53.885 [2024-11-20 16:09:51.899613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.885 [2024-11-20 16:09:51.908771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.885 [2024-11-20 16:09:51.908796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:53.885 [2024-11-20 16:09:51.908809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.123 ms 00:21:53.885 [2024-11-20 16:09:51.908816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.885 [2024-11-20 16:09:51.918074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.885 [2024-11-20 16:09:51.918100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:53.885 [2024-11-20 16:09:51.918114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.221 ms 00:21:53.885 [2024-11-20 16:09:51.918121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.885 [2024-11-20 16:09:51.927065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.885 [2024-11-20 16:09:51.927091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:53.885 [2024-11-20 16:09:51.927101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.880 ms 00:21:53.885 [2024-11-20 16:09:51.927109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.885 [2024-11-20 16:09:51.927155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:53.885 [2024-11-20 16:09:51.927168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927255] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:53.885 [2024-11-20 16:09:51.927439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 
[2024-11-20 16:09:51.927464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:53.886 [2024-11-20 16:09:51.927672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.927998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.928006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.928016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:53.886 [2024-11-20 16:09:51.928031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:53.886 [2024-11-20 16:09:51.928044] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:21:53.886 [2024-11-20 16:09:51.928056] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:53.886 [2024-11-20 16:09:51.928067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:53.886 [2024-11-20 16:09:51.928074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:53.886 [2024-11-20 16:09:51.928083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:53.886 [2024-11-20 16:09:51.928090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:53.886 [2024-11-20 16:09:51.928099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:53.886 [2024-11-20 16:09:51.928106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:53.886 [2024-11-20 16:09:51.928114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:53.886 [2024-11-20 16:09:51.928120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:53.886 [2024-11-20 16:09:51.928129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:53.886 [2024-11-20 16:09:51.928137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:53.886 [2024-11-20 16:09:51.928147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:21:53.886 [2024-11-20 16:09:51.928155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.886 [2024-11-20 16:09:51.941246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.886 [2024-11-20 16:09:51.941281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:53.886 [2024-11-20 16:09:51.941298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.068 ms 00:21:53.886 [2024-11-20 16:09:51.941307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.886 [2024-11-20 16:09:51.941686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.886 [2024-11-20 16:09:51.941703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:53.886 [2024-11-20 16:09:51.941714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:21:53.886 [2024-11-20 16:09:51.941741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.886 [2024-11-20 16:09:51.985246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.886 [2024-11-20 16:09:51.985280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.886 [2024-11-20 16:09:51.985293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.886 [2024-11-20 16:09:51.985302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.886 [2024-11-20 16:09:51.985409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.886 [2024-11-20 16:09:51.985419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.886 [2024-11-20 16:09:51.985429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.886 [2024-11-20 16:09:51.985438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:51.985485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:51.985494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.887 [2024-11-20 16:09:51.985504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:51.985512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:51.985531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:51.985538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.887 [2024-11-20 16:09:51.985547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:51.985554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.062161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.062201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.887 [2024-11-20 16:09:52.062214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.062222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 
16:09:52.126227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.887 [2024-11-20 16:09:52.126280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.887 [2024-11-20 16:09:52.126382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.887 [2024-11-20 16:09:52.126434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.887 [2024-11-20 16:09:52.126550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.887 [2024-11-20 16:09:52.126607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.887 [2024-11-20 16:09:52.126674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.887 [2024-11-20 16:09:52.126759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.887 [2024-11-20 16:09:52.126769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.887 [2024-11-20 16:09:52.126776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.887 [2024-11-20 16:09:52.126904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.840 ms, result 0 00:21:54.821 16:09:52 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:54.821 16:09:52 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:54.821 [2024-11-20 16:09:52.769421] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:21:54.821 [2024-11-20 16:09:52.769695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76750 ] 00:21:54.821 [2024-11-20 16:09:52.925928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.821 [2024-11-20 16:09:53.004895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.079 [2024-11-20 16:09:53.215957] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.079 [2024-11-20 16:09:53.216129] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.339 [2024-11-20 16:09:53.364250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.339 [2024-11-20 16:09:53.364289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.339 [2024-11-20 16:09:53.364299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:55.339 [2024-11-20 16:09:53.364306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.339 [2024-11-20 16:09:53.366388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.339 [2024-11-20 16:09:53.366417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.339 [2024-11-20 16:09:53.366425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:21:55.339 [2024-11-20 16:09:53.366431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.339 [2024-11-20 16:09:53.366489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.339 [2024-11-20 16:09:53.367260] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.339 [2024-11-20 16:09:53.367326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.339 [2024-11-20 16:09:53.367334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.339 [2024-11-20 16:09:53.367342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:21:55.339 [2024-11-20 16:09:53.367348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.339 [2024-11-20 16:09:53.368491] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:55.339 [2024-11-20 16:09:53.378046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.339 [2024-11-20 16:09:53.378165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:55.339 [2024-11-20 16:09:53.378179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.556 ms 00:21:55.340 [2024-11-20 16:09:53.378185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.378251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.378260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:55.340 [2024-11-20 16:09:53.378266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:21:55.340 [2024-11-20 16:09:53.378271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.382654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.382680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.340 [2024-11-20 16:09:53.382687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.354 ms 00:21:55.340 [2024-11-20 16:09:53.382692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.382781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.382789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.340 [2024-11-20 16:09:53.382796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:55.340 [2024-11-20 16:09:53.382801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.382823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.382831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.340 [2024-11-20 16:09:53.382837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:55.340 [2024-11-20 16:09:53.382842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.382858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:55.340 [2024-11-20 16:09:53.385455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.385559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.340 [2024-11-20 16:09:53.385571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.601 ms 00:21:55.340 [2024-11-20 16:09:53.385576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.385606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.385613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.340 [2024-11-20 16:09:53.385619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:55.340 [2024-11-20 16:09:53.385625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.385638] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:55.340 [2024-11-20 16:09:53.385656] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:55.340 [2024-11-20 16:09:53.385683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:55.340 [2024-11-20 16:09:53.385695] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:55.340 [2024-11-20 16:09:53.385792] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:55.340 [2024-11-20 16:09:53.385802] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.340 [2024-11-20 16:09:53.385810] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:55.340 [2024-11-20 16:09:53.385818] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.340 [2024-11-20 16:09:53.385827] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.340 [2024-11-20 16:09:53.385833] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:55.340 [2024-11-20 16:09:53.385840] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.340 [2024-11-20 16:09:53.385845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:55.340 [2024-11-20 16:09:53.385851] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:55.340 [2024-11-20 16:09:53.385857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.385862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.340 [2024-11-20 16:09:53.385868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:21:55.340 [2024-11-20 16:09:53.385874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.385942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.340 [2024-11-20 16:09:53.385950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.340 [2024-11-20 16:09:53.385956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:55.340 [2024-11-20 16:09:53.385962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.340 [2024-11-20 16:09:53.386047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.340 [2024-11-20 16:09:53.386054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.340 [2024-11-20 16:09:53.386060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:55.340 [2024-11-20 16:09:53.386077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.340 [2024-11-20 16:09:53.386095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.340 [2024-11-20 16:09:53.386106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.340 [2024-11-20 16:09:53.386111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:55.340 [2024-11-20 16:09:53.386117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.340 [2024-11-20 16:09:53.386127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.340 [2024-11-20 16:09:53.386132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:55.340 [2024-11-20 16:09:53.386137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386142] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.340 [2024-11-20 16:09:53.386147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.340 [2024-11-20 16:09:53.386162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.340 [2024-11-20 16:09:53.386178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.340 [2024-11-20 16:09:53.386193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.340 [2024-11-20 16:09:53.386208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.340 [2024-11-20 16:09:53.386223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.340 [2024-11-20 16:09:53.386233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.340 [2024-11-20 16:09:53.386238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:55.340 [2024-11-20 16:09:53.386243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.340 [2024-11-20 16:09:53.386248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:55.340 [2024-11-20 16:09:53.386253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:55.340 [2024-11-20 16:09:53.386258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:55.340 [2024-11-20 16:09:53.386268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:55.340 [2024-11-20 16:09:53.386273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386278] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.340 [2024-11-20 16:09:53.386286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.340 [2024-11-20 16:09:53.386291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.340 [2024-11-20 16:09:53.386298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.340 [2024-11-20 16:09:53.386304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:55.341 
[2024-11-20 16:09:53.386310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.341 [2024-11-20 16:09:53.386315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.341 [2024-11-20 16:09:53.386320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.341 [2024-11-20 16:09:53.386325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.341 [2024-11-20 16:09:53.386330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.341 [2024-11-20 16:09:53.386337] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.341 [2024-11-20 16:09:53.386344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:55.341 [2024-11-20 16:09:53.386355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:55.341 [2024-11-20 16:09:53.386361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:55.341 [2024-11-20 16:09:53.386366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:55.341 [2024-11-20 16:09:53.386372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:55.341 [2024-11-20 16:09:53.386377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:55.341 [2024-11-20 16:09:53.386382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:55.341 [2024-11-20 16:09:53.386388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:55.341 [2024-11-20 16:09:53.386393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:55.341 [2024-11-20 16:09:53.386399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:55.341 [2024-11-20 16:09:53.386426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.341 [2024-11-20 16:09:53.386432] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.341 [2024-11-20 16:09:53.386444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.341 [2024-11-20 16:09:53.386450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.341 [2024-11-20 16:09:53.386455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.341 [2024-11-20 16:09:53.386461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.386468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.341 [2024-11-20 16:09:53.386476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:21:55.341 [2024-11-20 16:09:53.386481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.408282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.408690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.341 [2024-11-20 16:09:53.408768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.762 ms 00:21:55.341 [2024-11-20 16:09:53.408790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.408908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.408935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:55.341 [2024-11-20 16:09:53.408951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:55.341 [2024-11-20 16:09:53.408966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.449562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.449660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.341 [2024-11-20 16:09:53.449700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.571 ms 00:21:55.341 [2024-11-20 16:09:53.449720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.449788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.449808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.341 [2024-11-20 16:09:53.449824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:55.341 [2024-11-20 16:09:53.449838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.450133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.450162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.341 [2024-11-20 16:09:53.450178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:21:55.341 [2024-11-20 16:09:53.450192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 
16:09:53.450307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.450324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.341 [2024-11-20 16:09:53.450340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:55.341 [2024-11-20 16:09:53.450354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.461221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.461306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.341 [2024-11-20 16:09:53.461344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.805 ms 00:21:55.341 [2024-11-20 16:09:53.461361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.470959] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:55.341 [2024-11-20 16:09:53.471062] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:55.341 [2024-11-20 16:09:53.471111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.471127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:55.341 [2024-11-20 16:09:53.471142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.653 ms 00:21:55.341 [2024-11-20 16:09:53.471157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.490109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.490205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:55.341 [2024-11-20 16:09:53.490247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.899 ms 00:21:55.341 [2024-11-20 16:09:53.490263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.499225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.499311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:55.341 [2024-11-20 16:09:53.499352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.902 ms 00:21:55.341 [2024-11-20 16:09:53.499369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.507812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.507894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:55.341 [2024-11-20 16:09:53.507934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.395 ms 00:21:55.341 [2024-11-20 16:09:53.507950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.508408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.508480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:55.341 [2024-11-20 16:09:53.508520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:21:55.341 [2024-11-20 16:09:53.508536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.552129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.552235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:55.341 [2024-11-20 16:09:53.552279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.565 ms 00:21:55.341 [2024-11-20 16:09:53.552297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.560233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:55.341 [2024-11-20 16:09:53.571995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.572097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:55.341 [2024-11-20 16:09:53.572134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.616 ms 00:21:55.341 [2024-11-20 16:09:53.572156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.572239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.572259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:55.341 [2024-11-20 16:09:53.572275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:55.341 [2024-11-20 16:09:53.572289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.572335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.572352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:55.341 [2024-11-20 16:09:53.572418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:55.341 [2024-11-20 16:09:53.572436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.341 [2024-11-20 16:09:53.572475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.341 [2024-11-20 16:09:53.572492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:55.341 [2024-11-20 16:09:53.572507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:55.342 [2024-11-20 16:09:53.572521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.342 [2024-11-20 16:09:53.572554] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:55.342 [2024-11-20 16:09:53.572607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.342 [2024-11-20 16:09:53.572625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:55.342 [2024-11-20 16:09:53.572640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:55.342 [2024-11-20 16:09:53.572669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.600 [2024-11-20 16:09:53.590636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.600 [2024-11-20 16:09:53.590736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:55.600 [2024-11-20 16:09:53.590777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.939 ms 00:21:55.600 [2024-11-20 16:09:53.590794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.600 [2024-11-20 16:09:53.590871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.600 [2024-11-20 16:09:53.591032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:55.600 [2024-11-20 16:09:53.591069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:55.600 [2024-11-20 16:09:53.591086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.600 [2024-11-20 16:09:53.591740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.600 [2024-11-20 16:09:53.594217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.256 ms, result 0 00:21:55.600 [2024-11-20 16:09:53.595226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.600 [2024-11-20 16:09:53.606163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.534  [2024-11-20T16:09:55.717Z] Copying: 48/256 [MB] (48 MBps) [2024-11-20T16:09:56.649Z] Copying: 91/256 [MB] (42 MBps) [2024-11-20T16:09:58.022Z] Copying: 135/256 [MB] (43 MBps) [2024-11-20T16:09:58.953Z] Copying: 178/256 [MB] (43 MBps) [2024-11-20T16:09:59.544Z] Copying: 223/256 [MB] (44 MBps) [2024-11-20T16:09:59.544Z] Copying: 256/256 [MB] (average 44 MBps)[2024-11-20 16:09:59.346832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.294 [2024-11-20 16:09:59.355929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.355964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:01.294 [2024-11-20 16:09:59.355978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:01.294 [2024-11-20 16:09:59.355992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.356013] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:01.294 [2024-11-20 16:09:59.358577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.358604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:01.294 [2024-11-20 16:09:59.358615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:22:01.294 [2024-11-20 16:09:59.358622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.358882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.358892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:01.294 [2024-11-20 16:09:59.358900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:22:01.294 [2024-11-20 16:09:59.358907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.362594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.362617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:01.294 [2024-11-20 16:09:59.362626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:22:01.294 [2024-11-20 16:09:59.362635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.369524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.369656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:22:01.294 [2024-11-20 16:09:59.369672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.873 ms 00:22:01.294 [2024-11-20 16:09:59.369679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.392042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.392169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:01.294 [2024-11-20 16:09:59.392184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.293 ms 00:22:01.294 [2024-11-20 16:09:59.392192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.405749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.405785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:01.294 [2024-11-20 16:09:59.405799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.535 ms 00:22:01.294 [2024-11-20 16:09:59.405806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.405938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.405948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:01.294 [2024-11-20 16:09:59.405957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:01.294 [2024-11-20 16:09:59.405964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.429004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.429032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:01.294 [2024-11-20 16:09:59.429042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.017 ms 00:22:01.294 [2024-11-20 16:09:59.429049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.451324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.451468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:01.294 [2024-11-20 16:09:59.451483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.254 ms 00:22:01.294 [2024-11-20 16:09:59.451490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.473415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.473442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:01.294 [2024-11-20 16:09:59.473451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.903 ms 00:22:01.294 [2024-11-20 16:09:59.473459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.495367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.294 [2024-11-20 16:09:59.495394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:01.294 [2024-11-20 16:09:59.495404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.863 ms 00:22:01.294 [2024-11-20 16:09:59.495411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.294 [2024-11-20 16:09:59.495430] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:01.294 [2024-11-20 
16:09:59.495443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 
[2024-11-20 16:09:59.495625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:01.294 [2024-11-20 16:09:59.495720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:22:01.295 [2024-11-20 16:09:59.495865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.495996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:01.295 [2024-11-20 16:09:59.496478] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:01.295 [2024-11-20 16:09:59.496486] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:22:01.295 [2024-11-20 16:09:59.496493] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:01.295 [2024-11-20 16:09:59.496500] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:01.295 [2024-11-20 16:09:59.496507] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:01.295 [2024-11-20 16:09:59.496515] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:01.295 [2024-11-20 16:09:59.496522] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:01.295 [2024-11-20 16:09:59.496530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:01.295 [2024-11-20 16:09:59.496537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:01.295 [2024-11-20 16:09:59.496543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:01.295 [2024-11-20 16:09:59.496549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:01.295 [2024-11-20 16:09:59.496556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.295 [2024-11-20 16:09:59.496566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:01.295 [2024-11-20 16:09:59.496574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:22:01.295 [2024-11-20 16:09:59.496581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.296 [2024-11-20 16:09:59.508856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.296 [2024-11-20 16:09:59.508958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:01.296 [2024-11-20 16:09:59.509010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:22:01.296 [2024-11-20 16:09:59.509032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.296 [2024-11-20 16:09:59.509392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.296 [2024-11-20 16:09:59.509456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:01.296 [2024-11-20 16:09:59.509538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:22:01.296 [2024-11-20 16:09:59.509560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.544190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.544222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.555 [2024-11-20 16:09:59.544232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.544240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.544310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.544318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.555 [2024-11-20 16:09:59.544326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.544333] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.544372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.544382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.555 [2024-11-20 16:09:59.544389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.544396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.544413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.544423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.555 [2024-11-20 16:09:59.544430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.544437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.620596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.620636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.555 [2024-11-20 16:09:59.620646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.620654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.683625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.683664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.555 [2024-11-20 16:09:59.683675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.683683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.683752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.683766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.555 [2024-11-20 16:09:59.683778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.683790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.683828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.683840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.555 [2024-11-20 16:09:59.683858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.683869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.683976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.683992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.555 [2024-11-20 16:09:59.684001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.684008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.684038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.684046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.555 [2024-11-20 16:09:59.684054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:01.555 [2024-11-20 16:09:59.684065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.684100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.684108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.555 [2024-11-20 16:09:59.684116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.684123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.684162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.555 [2024-11-20 16:09:59.684171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.555 [2024-11-20 16:09:59.684182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.555 [2024-11-20 16:09:59.684189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.555 [2024-11-20 16:09:59.684313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.376 ms, result 0 00:22:02.123 00:22:02.123 00:22:02.381 16:10:00 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:02.381 16:10:00 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:02.949 16:10:00 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:02.949 [2024-11-20 16:10:00.984780] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:22:02.949 [2024-11-20 16:10:00.985025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76842 ] 00:22:02.949 [2024-11-20 16:10:01.145764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.207 [2024-11-20 16:10:01.245276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.466 [2024-11-20 16:10:01.501436] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.466 [2024-11-20 16:10:01.501494] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.466 [2024-11-20 16:10:01.655774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.466 [2024-11-20 16:10:01.655815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.466 [2024-11-20 16:10:01.655827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:03.466 [2024-11-20 16:10:01.655835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.466 [2024-11-20 16:10:01.658456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.466 [2024-11-20 16:10:01.658487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.467 [2024-11-20 16:10:01.658496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:22:03.467 [2024-11-20 16:10:01.658504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.658571] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.467 [2024-11-20 16:10:01.659284] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.467 [2024-11-20 16:10:01.659306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.659313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.467 [2024-11-20 16:10:01.659322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:22:03.467 [2024-11-20 16:10:01.659328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.660397] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:03.467 [2024-11-20 16:10:01.672409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.672440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:03.467 [2024-11-20 16:10:01.672451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.013 ms 00:22:03.467 [2024-11-20 16:10:01.672459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.672540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.672551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:03.467 [2024-11-20 16:10:01.672559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:03.467 [2024-11-20 16:10:01.672566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.677945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:03.467 [2024-11-20 16:10:01.677975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.467 [2024-11-20 16:10:01.677984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:22:03.467 [2024-11-20 16:10:01.677992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.678075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.678084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.467 [2024-11-20 16:10:01.678093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:03.467 [2024-11-20 16:10:01.678100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.678123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.678134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.467 [2024-11-20 16:10:01.678141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:03.467 [2024-11-20 16:10:01.678148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.678171] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:03.467 [2024-11-20 16:10:01.681465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.681492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.467 [2024-11-20 16:10:01.681501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:22:03.467 [2024-11-20 16:10:01.681509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.681542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.681551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.467 [2024-11-20 16:10:01.681559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:03.467 [2024-11-20 16:10:01.681566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.681584] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:03.467 [2024-11-20 16:10:01.681602] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:03.467 [2024-11-20 16:10:01.681635] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:03.467 [2024-11-20 16:10:01.681649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:03.467 [2024-11-20 16:10:01.681760] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.467 [2024-11-20 16:10:01.681771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.467 [2024-11-20 16:10:01.681781] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:03.467 [2024-11-20 16:10:01.681791] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.467 [2024-11-20 16:10:01.681803] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.467 [2024-11-20 16:10:01.681811] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:03.467 [2024-11-20 16:10:01.681818] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.467 [2024-11-20 16:10:01.681825] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.467 [2024-11-20 16:10:01.681833] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.467 [2024-11-20 16:10:01.681840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.681847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.467 [2024-11-20 16:10:01.681855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:22:03.467 [2024-11-20 16:10:01.681861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.681949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.467 [2024-11-20 16:10:01.681959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.467 [2024-11-20 16:10:01.681967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:03.467 [2024-11-20 16:10:01.681973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.467 [2024-11-20 16:10:01.682090] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.467 [2024-11-20 16:10:01.682106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.467 [2024-11-20 16:10:01.682114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.467 [2024-11-20 16:10:01.682122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.467 [2024-11-20 16:10:01.682137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:03.467 [2024-11-20 16:10:01.682150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.467 [2024-11-20 16:10:01.682157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.467 [2024-11-20 16:10:01.682170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.467 [2024-11-20 16:10:01.682177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:03.467 [2024-11-20 16:10:01.682183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.467 [2024-11-20 16:10:01.682195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.467 [2024-11-20 16:10:01.682202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:03.467 [2024-11-20 16:10:01.682208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.467 [2024-11-20 16:10:01.682224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:03.467 [2024-11-20 16:10:01.682230] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.467 [2024-11-20 16:10:01.682243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.467 [2024-11-20 16:10:01.682257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.467 [2024-11-20 16:10:01.682263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.467 [2024-11-20 16:10:01.682276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.467 [2024-11-20 16:10:01.682283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.467 [2024-11-20 16:10:01.682295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.467 [2024-11-20 16:10:01.682302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:03.467 [2024-11-20 16:10:01.682309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.468 [2024-11-20 16:10:01.682315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.468 [2024-11-20 16:10:01.682321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:03.468 [2024-11-20 16:10:01.682327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.468 [2024-11-20 16:10:01.682334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.468 [2024-11-20 16:10:01.682340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:03.468 [2024-11-20 16:10:01.682346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.468 [2024-11-20 16:10:01.682353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.468 [2024-11-20 16:10:01.682359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:03.468 [2024-11-20 16:10:01.682365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.468 [2024-11-20 16:10:01.682371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.468 [2024-11-20 16:10:01.682378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:03.468 [2024-11-20 16:10:01.682384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.468 [2024-11-20 16:10:01.682390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.468 [2024-11-20 16:10:01.682398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.468 [2024-11-20 16:10:01.682405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.468 [2024-11-20 16:10:01.682413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.468 [2024-11-20 16:10:01.682421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:03.468 [2024-11-20 16:10:01.682428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.468 [2024-11-20 16:10:01.682434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.468 
[2024-11-20 16:10:01.682441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.468 [2024-11-20 16:10:01.682447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.468 [2024-11-20 16:10:01.682454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.468 [2024-11-20 16:10:01.682462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.468 [2024-11-20 16:10:01.682471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:03.468 [2024-11-20 16:10:01.682486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:03.468 [2024-11-20 16:10:01.682492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:03.468 [2024-11-20 16:10:01.682500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:03.468 [2024-11-20 16:10:01.682506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:03.468 [2024-11-20 16:10:01.682513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:03.468 [2024-11-20 16:10:01.682520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:03.468 [2024-11-20 16:10:01.682527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:03.468 [2024-11-20 16:10:01.682534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:03.468 [2024-11-20 16:10:01.682541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:03.468 [2024-11-20 16:10:01.682576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.468 [2024-11-20 16:10:01.682584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.468 [2024-11-20 16:10:01.682598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.468 [2024-11-20 16:10:01.682605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.468 [2024-11-20 16:10:01.682612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.468 [2024-11-20 16:10:01.682619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.468 [2024-11-20 16:10:01.682626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.468 [2024-11-20 16:10:01.682635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:22:03.468 [2024-11-20 16:10:01.682642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.468 [2024-11-20 16:10:01.708088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.468 [2024-11-20 16:10:01.708129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.468 [2024-11-20 16:10:01.708140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.384 ms 00:22:03.468 [2024-11-20 16:10:01.708148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.468 [2024-11-20 16:10:01.708277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.468 [2024-11-20 16:10:01.708295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.468 [2024-11-20 16:10:01.708304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:03.468 [2024-11-20 16:10:01.708311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.727 [2024-11-20 16:10:01.757389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.727 [2024-11-20 16:10:01.757433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.727 [2024-11-20 16:10:01.757445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.055 ms 00:22:03.727 [2024-11-20 16:10:01.757455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.727 [2024-11-20 16:10:01.757556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.727 [2024-11-20 16:10:01.757568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.727 [2024-11-20 16:10:01.757577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:03.727 [2024-11-20 16:10:01.757584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.727 [2024-11-20 16:10:01.757915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.727 [2024-11-20 16:10:01.757938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.727 [2024-11-20 16:10:01.757947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:22:03.727 [2024-11-20 16:10:01.757960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.727 [2024-11-20 16:10:01.758084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.727 [2024-11-20 16:10:01.758100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.727 [2024-11-20 16:10:01.758108] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:03.727 [2024-11-20 16:10:01.758115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.727 [2024-11-20 16:10:01.771213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.727 [2024-11-20 16:10:01.771243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.727 [2024-11-20 16:10:01.771253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.077 ms 00:22:03.728 [2024-11-20 16:10:01.771260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.783441] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:03.728 [2024-11-20 16:10:01.783476] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:03.728 [2024-11-20 16:10:01.783487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.783495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:03.728 [2024-11-20 16:10:01.783504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.131 ms 00:22:03.728 [2024-11-20 16:10:01.783512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.807465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.807506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:03.728 [2024-11-20 16:10:01.807517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.883 ms 00:22:03.728 [2024-11-20 16:10:01.807525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.818656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.818688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:03.728 [2024-11-20 16:10:01.818698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.064 ms 00:22:03.728 [2024-11-20 16:10:01.818705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.829679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.829709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:03.728 [2024-11-20 16:10:01.829719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.901 ms 00:22:03.728 [2024-11-20 16:10:01.829733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.830341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.830367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:03.728 [2024-11-20 16:10:01.830376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:22:03.728 [2024-11-20 16:10:01.830383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.884192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.884240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:03.728 [2024-11-20 16:10:01.884252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.786 ms 00:22:03.728 [2024-11-20 16:10:01.884260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.894418] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:03.728 [2024-11-20 16:10:01.907772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.907805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.728 [2024-11-20 16:10:01.907818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.417 ms 00:22:03.728 [2024-11-20 16:10:01.907829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.907900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.907910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:03.728 [2024-11-20 16:10:01.907919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:03.728 [2024-11-20 16:10:01.907926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.907970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.907978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.728 [2024-11-20 16:10:01.907986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:03.728 [2024-11-20 16:10:01.907996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.908027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.908036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.728 [2024-11-20 16:10:01.908044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:03.728 [2024-11-20 16:10:01.908051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.908077] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:03.728 [2024-11-20 16:10:01.908086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.908093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:03.728 [2024-11-20 16:10:01.908101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:03.728 [2024-11-20 16:10:01.908108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.930555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.930589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.728 [2024-11-20 16:10:01.930600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.426 ms 00:22:03.728 [2024-11-20 16:10:01.930608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.728 [2024-11-20 16:10:01.930697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.728 [2024-11-20 16:10:01.930708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:03.728 [2024-11-20 16:10:01.930716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:03.728 [2024-11-20 16:10:01.930737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
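[Editor's note] Every management step in this log is traced as an Action / name / duration / status quadruple, and finish_msg then totals the whole process (the startup completing just below reports 275.777 ms, after 227.256 ms for the earlier startup and 328.376 ms for the intervening 'FTL shutdown'). A rough sketch for tallying those per-step timings from a saved console log; it assumes one log entry per line, as Jenkins originally emits them, and the console.log path is hypothetical.

# Sketch: tally per-step FTL durations from a saved console log.
# Targets the trace_step "name: <step>" / "duration: <n> ms" pairs
# seen throughout this run.
import re
from collections import defaultdict

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(path):
    totals = defaultdict(float)
    pending = None  # step name still waiting for its duration line
    with open(path) as log:
        for line in log:
            m = NAME_RE.search(line)
            if m:
                pending = m.group(1)
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                totals[pending] += float(m.group(1))
                pending = None
    return totals

# e.g. for the startup above, "Initialize NV cache" comes out around 49.055 ms
for name, ms in sorted(step_durations("console.log").items(), key=lambda kv: -kv[1]):
    print(f"{ms:9.3f} ms  {name}")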
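[Editor's note] Likewise, the SB metadata layout dumped during this startup (Region type / ver / blk_offs / blk_sz, all hex block counts) describes the same layout as the MiB-based region dump that precedes it; the two reconcile at 4 KiB per FTL block. That block size is inferred from the dump itself: blk_sz:0x5a00 is 23040 blocks, and 23040 x 4 KiB = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB". A small checker under that assumption:

# Sketch: convert the hex blk_offs/blk_sz fields of the SB metadata
# layout dump into the MiB figures of the region dump above.
FTL_BLOCK_SIZE = 4096  # assumed: 0x5a00 blocks == the 90.00 MiB l2p region

def blocks_to_mib(hex_blocks: str) -> float:
    return int(hex_blocks, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

# Region type:0x2 (l2p): blk_offs:0x20 blk_sz:0x5a00
assert blocks_to_mib("0x20") == 0.125           # -> "offset: 0.12 MiB"
assert blocks_to_mib("0x5a00") == 90.0          # -> "blocks: 90.00 MiB"
# Base-dev Region type:0x9: blk_sz:0x1900000
assert blocks_to_mib("0x1900000") == 102400.0   # -> "blocks: 102400.00 MiB" (data_btm)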
00:22:03.728 [2024-11-20 16:10:01.931836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.728 [2024-11-20 16:10:01.934875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.777 ms, result 0 00:22:03.728 [2024-11-20 16:10:01.935478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.728 [2024-11-20 16:10:01.948411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.987  [2024-11-20T16:10:02.237Z] Copying: 4096/4096 [kB] (average 42 MBps)[2024-11-20 16:10:02.046359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.987 [2024-11-20 16:10:02.055121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.987 [2024-11-20 16:10:02.055156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:03.987 [2024-11-20 16:10:02.055172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:03.987 [2024-11-20 16:10:02.055180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.987 [2024-11-20 16:10:02.055200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:03.988 [2024-11-20 16:10:02.057756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.057785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:03.988 [2024-11-20 16:10:02.057795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:22:03.988 [2024-11-20 16:10:02.057804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.059510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.059544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:03.988 [2024-11-20 16:10:02.059554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:22:03.988 [2024-11-20 16:10:02.059561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.063549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.063573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:03.988 [2024-11-20 16:10:02.063582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.967 ms 00:22:03.988 [2024-11-20 16:10:02.063589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.070529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.070558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:03.988 [2024-11-20 16:10:02.070568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:22:03.988 [2024-11-20 16:10:02.070576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.092886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.092915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:03.988 [2024-11-20 16:10:02.092925] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.256 ms 00:22:03.988 [2024-11-20 16:10:02.092932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.106802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.106836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:03.988 [2024-11-20 16:10:02.106847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.836 ms 00:22:03.988 [2024-11-20 16:10:02.106856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.106985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.106994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:03.988 [2024-11-20 16:10:02.107002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:03.988 [2024-11-20 16:10:02.107009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.129508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.129537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:03.988 [2024-11-20 16:10:02.129547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.477 ms 00:22:03.988 [2024-11-20 16:10:02.129554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.151630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.151659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:03.988 [2024-11-20 16:10:02.151669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.043 ms 00:22:03.988 [2024-11-20 16:10:02.151677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.173338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.173369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:03.988 [2024-11-20 16:10:02.173378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.629 ms 00:22:03.988 [2024-11-20 16:10:02.173386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.195066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.988 [2024-11-20 16:10:02.195096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.988 [2024-11-20 16:10:02.195106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.624 ms 00:22:03.988 [2024-11-20 16:10:02.195112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.988 [2024-11-20 16:10:02.195144] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.988 [2024-11-20 16:10:02.195158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:03.988 [2024-11-20 16:10:02.195191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:03.988 [2024-11-20 16:10:02.195364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195748] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:03.989 [2024-11-20 16:10:02.195836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:03.990 [2024-11-20 16:10:02.195928] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:03.990 [2024-11-20 16:10:02.195936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:22:03.990 [2024-11-20 16:10:02.195943] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:03.990 [2024-11-20 16:10:02.195951] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:03.990 [2024-11-20 16:10:02.195958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:03.990 [2024-11-20 16:10:02.195965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:03.990 [2024-11-20 16:10:02.195972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:03.990 [2024-11-20 16:10:02.195979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:03.990 [2024-11-20 16:10:02.195989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:03.990 [2024-11-20 16:10:02.195995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:03.990 [2024-11-20 16:10:02.196001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:03.990 [2024-11-20 16:10:02.196008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.990 [2024-11-20 16:10:02.196016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:03.990 [2024-11-20 16:10:02.196024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:22:03.990 [2024-11-20 16:10:02.196030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.990 [2024-11-20 16:10:02.208263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.990 [2024-11-20 16:10:02.208292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:03.990 [2024-11-20 16:10:02.208302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.216 ms 00:22:03.990 [2024-11-20 16:10:02.208309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.990 [2024-11-20 16:10:02.208677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.990 [2024-11-20 16:10:02.208687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:03.990 [2024-11-20 16:10:02.208695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:22:03.990 [2024-11-20 16:10:02.208702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.243036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.243068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.248 [2024-11-20 16:10:02.243078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.243089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.243155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.243162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.248 [2024-11-20 16:10:02.243170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.243177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.243213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.243222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.248 [2024-11-20 16:10:02.243230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.243237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.243256] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.243264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.248 [2024-11-20 16:10:02.243272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.243279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.318131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.318172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.248 [2024-11-20 16:10:02.318182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.318194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.379565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.379605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.248 [2024-11-20 16:10:02.379614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.379622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.379665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.248 [2024-11-20 16:10:02.379673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.248 [2024-11-20 16:10:02.379681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.248 [2024-11-20 16:10:02.379688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.248 [2024-11-20 16:10:02.379715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.249 [2024-11-20 16:10:02.379743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.249 [2024-11-20 16:10:02.379750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.249 [2024-11-20 16:10:02.379758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.249 [2024-11-20 16:10:02.379843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.249 [2024-11-20 16:10:02.379852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.249 [2024-11-20 16:10:02.379860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.249 [2024-11-20 16:10:02.379867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.249 [2024-11-20 16:10:02.379894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.249 [2024-11-20 16:10:02.379903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:04.249 [2024-11-20 16:10:02.379913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.249 [2024-11-20 16:10:02.379920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.249 [2024-11-20 16:10:02.379955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.249 [2024-11-20 16:10:02.379964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.249 [2024-11-20 16:10:02.379971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.249 [2024-11-20 16:10:02.379978] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:04.249 [2024-11-20 16:10:02.380017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.249 [2024-11-20 16:10:02.380030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.249 [2024-11-20 16:10:02.380037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.249 [2024-11-20 16:10:02.380044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.249 [2024-11-20 16:10:02.380168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 325.039 ms, result 0 00:22:04.815 00:22:04.815 00:22:05.073 16:10:03 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76864 00:22:05.073 16:10:03 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76864 00:22:05.073 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76864 ']' 00:22:05.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.073 16:10:03 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:05.073 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.073 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.073 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.073 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.074 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:05.074 [2024-11-20 16:10:03.148978] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:22:05.074 [2024-11-20 16:10:03.149139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76864 ] 00:22:05.074 [2024-11-20 16:10:03.307902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.331 [2024-11-20 16:10:03.404003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.898 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.898 16:10:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:05.898 16:10:03 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:06.158 [2024-11-20 16:10:04.191365] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.158 [2024-11-20 16:10:04.191425] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.158 [2024-11-20 16:10:04.362322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.362368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.158 [2024-11-20 16:10:04.362382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.158 [2024-11-20 16:10:04.362390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.364970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.365006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.158 [2024-11-20 16:10:04.365016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:22:06.158 [2024-11-20 16:10:04.365024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.365093] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.158 [2024-11-20 16:10:04.365812] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.158 [2024-11-20 16:10:04.365840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.365848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.158 [2024-11-20 16:10:04.365859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:22:06.158 [2024-11-20 16:10:04.365866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.367216] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.158 [2024-11-20 16:10:04.379459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.379498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.158 [2024-11-20 16:10:04.379517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.249 ms 00:22:06.158 [2024-11-20 16:10:04.379526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.379611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.379624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.158 [2024-11-20 16:10:04.379633] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:06.158 [2024-11-20 16:10:04.379641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.384325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.384362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.158 [2024-11-20 16:10:04.384373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:22:06.158 [2024-11-20 16:10:04.384381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.384482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.384494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.158 [2024-11-20 16:10:04.384503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:06.158 [2024-11-20 16:10:04.384511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.384538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.384547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.158 [2024-11-20 16:10:04.384555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:06.158 [2024-11-20 16:10:04.384563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.384586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:06.158 [2024-11-20 16:10:04.387783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.387811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.158 [2024-11-20 16:10:04.387822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:22:06.158 [2024-11-20 16:10:04.387830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.387866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.387873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.158 [2024-11-20 16:10:04.387883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.158 [2024-11-20 16:10:04.387892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.387912] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.158 [2024-11-20 16:10:04.387928] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.158 [2024-11-20 16:10:04.387969] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.158 [2024-11-20 16:10:04.387984] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:06.158 [2024-11-20 16:10:04.388088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.158 [2024-11-20 16:10:04.388105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.158 [2024-11-20 16:10:04.388121] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:06.158 [2024-11-20 16:10:04.388131] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.158 [2024-11-20 16:10:04.388142] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.158 [2024-11-20 16:10:04.388149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:06.158 [2024-11-20 16:10:04.388158] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.158 [2024-11-20 16:10:04.388166] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.158 [2024-11-20 16:10:04.388176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.158 [2024-11-20 16:10:04.388183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.388192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.158 [2024-11-20 16:10:04.388200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:22:06.158 [2024-11-20 16:10:04.388208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.388297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.158 [2024-11-20 16:10:04.388312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.158 [2024-11-20 16:10:04.388320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:06.158 [2024-11-20 16:10:04.388329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.158 [2024-11-20 16:10:04.388428] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.158 [2024-11-20 16:10:04.388439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.159 [2024-11-20 16:10:04.388447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.159 [2024-11-20 16:10:04.388472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.159 [2024-11-20 16:10:04.388497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.159 [2024-11-20 16:10:04.388512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.159 [2024-11-20 16:10:04.388520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:06.159 [2024-11-20 16:10:04.388526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.159 [2024-11-20 16:10:04.388534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.159 [2024-11-20 16:10:04.388540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:06.159 [2024-11-20 16:10:04.388548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.159 
[2024-11-20 16:10:04.388555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.159 [2024-11-20 16:10:04.388563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.159 [2024-11-20 16:10:04.388589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.159 [2024-11-20 16:10:04.388614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.159 [2024-11-20 16:10:04.388647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.159 [2024-11-20 16:10:04.388670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.159 [2024-11-20 16:10:04.388691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.159 [2024-11-20 16:10:04.388707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.159 [2024-11-20 16:10:04.388715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:06.159 [2024-11-20 16:10:04.388733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.159 [2024-11-20 16:10:04.388742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.159 [2024-11-20 16:10:04.388748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:06.159 [2024-11-20 16:10:04.388758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.159 [2024-11-20 16:10:04.388773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:06.159 [2024-11-20 16:10:04.388780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.159 [2024-11-20 16:10:04.388797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.159 [2024-11-20 16:10:04.388805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.159 [2024-11-20 16:10:04.388821] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:06.159 [2024-11-20 16:10:04.388828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.159 [2024-11-20 16:10:04.388836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.159 [2024-11-20 16:10:04.388843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.159 [2024-11-20 16:10:04.388850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.159 [2024-11-20 16:10:04.388857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.159 [2024-11-20 16:10:04.388866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.159 [2024-11-20 16:10:04.388876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.388887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:06.159 [2024-11-20 16:10:04.388894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:06.159 [2024-11-20 16:10:04.388904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:06.159 [2024-11-20 16:10:04.388911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:06.159 [2024-11-20 16:10:04.388920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:06.159 [2024-11-20 16:10:04.388928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:06.159 [2024-11-20 16:10:04.388936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:06.159 [2024-11-20 16:10:04.388943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:06.159 [2024-11-20 16:10:04.388951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:06.159 [2024-11-20 16:10:04.388958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.388967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.388974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.388982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.388989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:06.159 [2024-11-20 16:10:04.388998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.159 [2024-11-20 
16:10:04.389006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.389016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.159 [2024-11-20 16:10:04.389023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.159 [2024-11-20 16:10:04.389032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.159 [2024-11-20 16:10:04.389038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.159 [2024-11-20 16:10:04.389048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.159 [2024-11-20 16:10:04.389055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.159 [2024-11-20 16:10:04.389064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:22:06.159 [2024-11-20 16:10:04.389070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.414397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.414429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.419 [2024-11-20 16:10:04.414441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.256 ms 00:22:06.419 [2024-11-20 16:10:04.414450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.414563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.414573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.419 [2024-11-20 16:10:04.414582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:06.419 [2024-11-20 16:10:04.414589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.444694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.444735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.419 [2024-11-20 16:10:04.444746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.083 ms 00:22:06.419 [2024-11-20 16:10:04.444754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.444808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.444817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.419 [2024-11-20 16:10:04.444827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:06.419 [2024-11-20 16:10:04.444834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.445142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.445163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.419 [2024-11-20 16:10:04.445177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:22:06.419 [2024-11-20 16:10:04.445184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.445305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.445319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.419 [2024-11-20 16:10:04.445328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:06.419 [2024-11-20 16:10:04.445336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.459366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.459395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.419 [2024-11-20 16:10:04.459406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.007 ms 00:22:06.419 [2024-11-20 16:10:04.459413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.481863] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:06.419 [2024-11-20 16:10:04.481904] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:06.419 [2024-11-20 16:10:04.481921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.481930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:06.419 [2024-11-20 16:10:04.481943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.399 ms 00:22:06.419 [2024-11-20 16:10:04.481951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.506973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.507011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:06.419 [2024-11-20 16:10:04.507023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.956 ms 00:22:06.419 [2024-11-20 16:10:04.507031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.518342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.518370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:06.419 [2024-11-20 16:10:04.518383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.254 ms 00:22:06.419 [2024-11-20 16:10:04.518390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.529205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.529234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:06.419 [2024-11-20 16:10:04.529245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.752 ms 00:22:06.419 [2024-11-20 16:10:04.529252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.529877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.529902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.419 [2024-11-20 16:10:04.529912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:22:06.419 [2024-11-20 16:10:04.529919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 
16:10:04.583905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.583951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:06.419 [2024-11-20 16:10:04.583966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.960 ms 00:22:06.419 [2024-11-20 16:10:04.583973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.594501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:06.419 [2024-11-20 16:10:04.607920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.607959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.419 [2024-11-20 16:10:04.607972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.859 ms 00:22:06.419 [2024-11-20 16:10:04.607981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.608049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.608061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:06.419 [2024-11-20 16:10:04.608069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.419 [2024-11-20 16:10:04.608078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.608124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.608135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.419 [2024-11-20 16:10:04.608142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:06.419 [2024-11-20 16:10:04.608153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.608176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.608185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.419 [2024-11-20 16:10:04.608193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.419 [2024-11-20 16:10:04.608203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.608231] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:06.419 [2024-11-20 16:10:04.608244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.608251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:06.419 [2024-11-20 16:10:04.608262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.419 [2024-11-20 16:10:04.608269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.419 [2024-11-20 16:10:04.631266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.419 [2024-11-20 16:10:04.631298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.419 [2024-11-20 16:10:04.631311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.972 ms 00:22:06.420 [2024-11-20 16:10:04.631318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.420 [2024-11-20 16:10:04.631402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.420 [2024-11-20 16:10:04.631413] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.420 [2024-11-20 16:10:04.631423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:06.420 [2024-11-20 16:10:04.631432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.420 [2024-11-20 16:10:04.632614] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.420 [2024-11-20 16:10:04.635507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.022 ms, result 0 00:22:06.420 [2024-11-20 16:10:04.636314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:06.420 Some configs were skipped because the RPC state that can call them passed over. 00:22:06.677 16:10:04 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:06.677 [2024-11-20 16:10:04.866606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.677 [2024-11-20 16:10:04.866652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:06.677 [2024-11-20 16:10:04.866664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:22:06.677 [2024-11-20 16:10:04.866674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.678 [2024-11-20 16:10:04.866707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.584 ms, result 0 00:22:06.678 true 00:22:06.678 16:10:04 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:06.937 [2024-11-20 16:10:05.062494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.937 [2024-11-20 16:10:05.062532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:06.937 [2024-11-20 16:10:05.062545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:22:06.937 [2024-11-20 16:10:05.062552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.937 [2024-11-20 16:10:05.062587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.233 ms, result 0 00:22:06.937 true 00:22:06.937 16:10:05 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76864 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76864 ']' 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76864 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76864 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.937 killing process with pid 76864 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76864' 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76864 00:22:06.937 16:10:05 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76864 00:22:07.873 [2024-11-20 16:10:05.760020] 
00:22:07.873 [2024-11-20 16:10:05.760020] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.003 ms, status: 0
00:22:07.873 [2024-11-20 16:10:05.760109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-20 16:10:05.762256] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 2.134 ms, status: 0
[2024-11-20 16:10:05.762529] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.206 ms, status: 0
[2024-11-20 16:10:05.765683] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 3.100 ms, status: 0
[2024-11-20 16:10:05.771545] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 5.783 ms, status: 0
[2024-11-20 16:10:05.778855] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 7.224 ms, status: 0
[2024-11-20 16:10:05.785392] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 6.455 ms, status: 0
[2024-11-20 16:10:05.785536] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.071 ms, status: 0
[2024-11-20 16:10:05.793439] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 7.863 ms, status: 0
[2024-11-20 16:10:05.800779] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 7.260 ms, status: 0
[2024-11-20 16:10:05.808159] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 7.299 ms, status: 0
[2024-11-20 16:10:05.815323] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 7.071 ms, status: 0
[2024-11-20 16:10:05.815388] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 16:10:05.815399 .. 16:10:05.816046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report the same line)
00:22:07.874 [2024-11-20 16:10:05.816058] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-20 16:10:05.816068] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7
[2024-11-20 16:10:05.816077] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-20 16:10:05.816086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-20 16:10:05.816091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-20 16:10:05.816098] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-20 16:10:05.816103] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
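In the shutdown statistics above, WAF is the write amplification factor, total media writes divided by user writes; with user writes at 0 the ratio is reported as inf, and the 960 total writes are all FTL metadata traffic. A hypothetical way to pull these counters out of a saved console log (build.log is a placeholder name, not a file this job produces):

  grep 'ftl_dev_dump_stats' build.log \
    | grep -E 'total writes|user writes|WAF' \
    | sed 's/.*\[FTL\]\[ftl0\] //'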
[2024-11-20 16:10:05.816133] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.746 ms, status: 0
[2024-11-20 16:10:05.825970] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 9.801 ms, status: 0
[2024-11-20 16:10:05.826315] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.266 ms, status: 0
[2024-11-20 16:10:05.860761] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.860874] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.860930] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.860967] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:22:07.875 [2024-11-20 16:10:05.919976] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968484] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968605] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968652] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968763] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968811] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968863] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.968918] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
[2024-11-20 16:10:05.969044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.006 ms, result 0
00:22:08.442 16:10:06 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:08.442 [2024-11-20 16:10:06.552934] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
[2024-11-20 16:10:06.553350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76911 ]
00:22:08.699 [2024-11-20 16:10:06.708402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 16:10:06.784769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:08.959 [2024-11-20 16:10:06.995235] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 16:10:06.995285] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 16:10:07.146965] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.003 ms, status: 0
[2024-11-20 16:10:07.149051] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 2.020 ms, status: 0
[2024-11-20 16:10:07.149150] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 16:10:07.149688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 16:10:07.149710] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.565 ms, status: 0
[2024-11-20 16:10:07.150687] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 16:10:07.160151] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 9.465 ms, status: 0
[2024-11-20 16:10:07.160266] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.014 ms, status: 0
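The spdk_dd invocation above streams 65536 blocks out of ftl0 into a plain file, with the --json config bringing the bdev stack up inside the dd process first. Assuming spdk_dd's symmetric file/bdev options (--if/--ob mirroring the --of/--ib used here), a sketch of both directions with placeholder paths:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  "$DD" --ib=ftl0 --of=/tmp/data --count=65536 --json="$CONF"    # bdev -> file
  "$DD" --if=/tmp/data --ob=ftl0 --count=65536 --json="$CONF"    # file -> bdev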
[2024-11-20 16:10:07.164593] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 4.278 ms, status: 0
[2024-11-20 16:10:07.164699] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.041 ms, status: 0
[2024-11-20 16:10:07.164752] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.005 ms, status: 0
[2024-11-20 16:10:07.164787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-20 16:10:07.167554] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 2.770 ms, status: 0
[2024-11-20 16:10:07.167626] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.013 ms, status: 0
[2024-11-20 16:10:07.167658] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 16:10:07.167673] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-20 16:10:07.167699] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-20 16:10:07.167710] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-20 16:10:07.167796] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-20 16:10:07.167805] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-20 16:10:07.167813] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
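The startup being traced here runs entirely from the --json file handed to spdk_dd; at minimum that config has to recreate ftl0 on the same base and cache bdevs before the copy can start. A hypothetical minimal shape (method and parameter names follow the bdev_ftl_create RPC; the real ftl.json under test/ftl/config is generated by the test scripts and carries more than this, and <base_bdev> is a placeholder):

  cat > /tmp/ftl.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_ftl_create",
            "params": { "name": "ftl0", "base_bdev": "<base_bdev>", "cache": "nvc0n1p0" } }
        ] }
    ]
  }
  EOF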
00:22:08.959 [2024-11-20 16:10:07.167821] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 16:10:07.167830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 16:10:07.167836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-20 16:10:07.167842] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 16:10:07.167848] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 16:10:07.167853] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 16:10:07.167859] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.203 ms, status: 0
[2024-11-20 16:10:07.167942] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.053 ms, status: 0
[2024-11-20 16:10:07.168035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-20 16:10:07.168047] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-20 16:10:07.168065] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
[2024-11-20 16:10:07.168081] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
[2024-11-20 16:10:07.168097] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
[2024-11-20 16:10:07.168116] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
[2024-11-20 16:10:07.168132] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
[2024-11-20 16:10:07.168147] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
[2024-11-20 16:10:07.168161] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
[2024-11-20 16:10:07.168175] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
[2024-11-20 16:10:07.168189] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
[2024-11-20 16:10:07.168204] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
[2024-11-20 16:10:07.168219] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
[2024-11-20 16:10:07.168233] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
[2024-11-20 16:10:07.168247] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-20 16:10:07.168261] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-20 16:10:07.168267] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-20 16:10:07.168287] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
[2024-11-20 16:10:07.168302] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-20 16:10:07.168318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-20 16:10:07.168325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-20 16:10:07.168331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
[2024-11-20 16:10:07.168336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
[2024-11-20 16:10:07.168342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
[2024-11-20 16:10:07.168347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
[2024-11-20 16:10:07.168353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
[2024-11-20 16:10:07.168358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
[2024-11-20 16:10:07.168363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
[2024-11-20 16:10:07.168368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
[2024-11-20 16:10:07.168373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
[2024-11-20 16:10:07.168379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
[2024-11-20 16:10:07.168384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
[2024-11-20 16:10:07.168389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
[2024-11-20 16:10:07.168394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
[2024-11-20 16:10:07.168400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-20 16:10:07.168405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-20 16:10:07.168411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-20 16:10:07.168417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-20 16:10:07.168422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-20 16:10:07.168428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-20 16:10:07.168433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 16:10:07.168438] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.455 ms, status: 0
00:22:08.960 [2024-11-20 16:10:07.189169] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 20.670 ms, status: 0
[2024-11-20 16:10:07.189300] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.048 ms, status: 0
00:22:09.218 [2024-11-20 16:10:07.229205] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 39.866 ms, status: 0
[2024-11-20 16:10:07.229322] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.003 ms, status: 0
[2024-11-20 16:10:07.229640] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.281 ms, status: 0
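A quick cross-check on the layout dumped above: region type 0x9 (the base-device data region) spans 0x1900000 blocks, which at the 4 KiB FTL block size (assumed here; it is the usual default) is exactly the 102400.00 MiB reported for data_btm, and the 23592960 L2P entries at the reported address size of 4 bytes account for the 90.00 MiB l2p region:

  printf '%d MiB\n' $(( 0x1900000 * 4 / 1024 ))        # data region: 102400 MiB
  printf '%d MiB\n' $(( 23592960 * 4 / 1024 / 1024 ))  # L2P table: 90 MiB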
[2024-11-20 16:10:07.229787] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.098 ms, status: 0
[2024-11-20 16:10:07.240826] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 10.982 ms, status: 0
[2024-11-20 16:10:07.250524] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-20 16:10:07.250566] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 16:10:07.250575] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 9.620 ms, status: 0
[2024-11-20 16:10:07.269433] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 18.788 ms, status: 0
[2024-11-20 16:10:07.278774] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 9.234 ms, status: 0
[2024-11-20 16:10:07.287739] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 8.873 ms, status: 0
[2024-11-20 16:10:07.288247] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.406 ms, status: 0
00:22:09.219 [2024-11-20 16:10:07.332272] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 43.975 ms, status: 0
[2024-11-20 16:10:07.340050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-20 16:10:07.351650] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 19.251 ms, status: 0
[2024-11-20 16:10:07.351786] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.011 ms, status: 0
[2024-11-20 16:10:07.351842] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.022 ms, status: 0
[2024-11-20 16:10:07.351888] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.009 ms, status: 0
[2024-11-20 16:10:07.351931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 16:10:07.351939] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.008 ms, status: 0
[2024-11-20 16:10:07.370053] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 18.080 ms, status: 0
[2024-11-20 16:10:07.370173] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.030 ms, status: 0
[2024-11-20 16:10:07.370842] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20 16:10:07.373182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.638 ms, result 0
00:22:09.219 [2024-11-20 16:10:07.373956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 16:10:07.389060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:10.251 [2024-11-20T16:10:09.436Z] Copying: 46/256 [MB] (46 MBps)
[2024-11-20T16:10:10.808Z] Copying: 88/256 [MB] (42 MBps)
[2024-11-20T16:10:11.743Z] Copying: 131/256 [MB] (42 MBps)
[2024-11-20T16:10:12.685Z] Copying: 175/256 [MB] (43 MBps)
[2024-11-20T16:10:13.630Z] Copying: 205/256 [MB] (29 MBps)
[2024-11-20T16:10:14.572Z] Copying: 217/256 [MB] (12 MBps)
[2024-11-20T16:10:15.515Z] Copying: 230/256 [MB] (12 MBps)
[2024-11-20T16:10:16.082Z] Copying: 247/256 [MB] (16 MBps)
[2024-11-20T16:10:16.654Z] Copying: 256/256 [MB] (average 29 MBps)
[2024-11-20 16:10:16.405382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:18.404 [2024-11-20 16:10:16.416363] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
[2024-11-20 16:10:16.416451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-20 16:10:16.419304] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 2.838 ms, status: 0
[2024-11-20 16:10:16.419708] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.329 ms, status: 0
[2024-11-20 16:10:16.423499] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 3.730 ms, status: 0
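The copy above moves 256 MB at a reported average of 29 MBps, i.e. roughly 8.8 s of transfer, which matches the spread of the tick timestamps (16:10:09.4 through 16:10:16.7); the per-interval rate falls from 46 MBps to the 12-16 MBps range over the last quarter. The average as a one-line sanity check:

  echo 'scale=1; 256 / 29' | bc    # ~8.8 seconds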
[2024-11-20 16:10:16.430802] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 7.240 ms, status: 0
[2024-11-20 16:10:16.455445] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 24.492 ms, status: 0
[2024-11-20 16:10:16.469876] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 14.339 ms, status: 0
[2024-11-20 16:10:16.470080] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.086 ms, status: 0
[2024-11-20 16:10:16.494707] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 24.577 ms, status: 0
[2024-11-20 16:10:16.518352] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 23.534 ms, status: 0
[2024-11-20 16:10:16.541394] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 22.953 ms, status: 0
[2024-11-20 16:10:16.565094] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 23.590 ms, status: 0
16:10:16.565185] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:18.404 [2024-11-20 16:10:16.565199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 
16:10:16.565380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:18.404 [2024-11-20 16:10:16.565502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:22:18.405 [2024-11-20 16:10:16.565566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:18.405 [2024-11-20 16:10:16.565979] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:18.405 [2024-11-20 16:10:16.565987] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf7391a4-c2c3-4b8c-8e9f-ec5e557c8bd7 00:22:18.405 [2024-11-20 16:10:16.565994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:18.405 [2024-11-20 16:10:16.566001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:18.405 [2024-11-20 16:10:16.566008] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:18.405 [2024-11-20 16:10:16.566016] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:18.405 [2024-11-20 16:10:16.566023] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:18.405 [2024-11-20 16:10:16.566030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:18.405 [2024-11-20 16:10:16.566038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:18.405 [2024-11-20 16:10:16.566044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:18.405 [2024-11-20 16:10:16.566050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:18.405 [2024-11-20 16:10:16.566057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.405 [2024-11-20 16:10:16.566067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:18.405 [2024-11-20 16:10:16.566075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:22:18.405 [2024-11-20 16:10:16.566081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.405 [2024-11-20 16:10:16.578565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.405 [2024-11-20 16:10:16.578596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:18.405 [2024-11-20 16:10:16.578607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.453 ms 00:22:18.405 [2024-11-20 16:10:16.578615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.405 [2024-11-20 16:10:16.579060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.405 [2024-11-20 16:10:16.579080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:18.405 [2024-11-20 16:10:16.579089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:22:18.405 [2024-11-20 16:10:16.579096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.405 [2024-11-20 16:10:16.614347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.405 [2024-11-20 16:10:16.614381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.405 [2024-11-20 16:10:16.614393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.405 [2024-11-20 16:10:16.614401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.405 [2024-11-20 16:10:16.614496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.405 [2024-11-20 16:10:16.614505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.405 [2024-11-20 16:10:16.614515] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.405 [2024-11-20 16:10:16.614526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.405 [2024-11-20 16:10:16.614566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.405 [2024-11-20 16:10:16.614576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.406 [2024-11-20 16:10:16.614585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.406 [2024-11-20 16:10:16.614594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.406 [2024-11-20 16:10:16.614611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.406 [2024-11-20 16:10:16.614622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.406 [2024-11-20 16:10:16.614631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.406 [2024-11-20 16:10:16.614639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.692114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.692152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.667 [2024-11-20 16:10:16.692163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.692170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.667 [2024-11-20 16:10:16.756276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.667 [2024-11-20 16:10:16.756373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.667 [2024-11-20 16:10:16.756426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.667 [2024-11-20 16:10:16.756534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:22:18.667 [2024-11-20 16:10:16.756589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.667 [2024-11-20 16:10:16.756651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.667 [2024-11-20 16:10:16.756706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.667 [2024-11-20 16:10:16.756716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.667 [2024-11-20 16:10:16.756742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.667 [2024-11-20 16:10:16.756870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.508 ms, result 0 00:22:19.239 00:22:19.239 00:22:19.239 16:10:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:19.811 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:19.811 16:10:18 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:19.811 16:10:18 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:19.811 16:10:18 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:19.811 16:10:18 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:19.811 16:10:18 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:19.811 16:10:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:20.072 Process with pid 76864 is not found 00:22:20.072 16:10:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76864 00:22:20.072 16:10:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76864 ']' 00:22:20.072 16:10:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76864 00:22:20.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76864) - No such process 00:22:20.072 16:10:18 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76864 is not found' 00:22:20.072 00:22:20.072 real 0m52.087s 00:22:20.072 user 1m18.518s 00:22:20.072 sys 0m4.814s 00:22:20.072 ************************************ 00:22:20.072 END TEST ftl_trim 00:22:20.072 ************************************ 00:22:20.072 16:10:18 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.072 16:10:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:20.072 16:10:18 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:20.072 16:10:18 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:20.072 16:10:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.072 16:10:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:20.072 ************************************ 00:22:20.072 START TEST ftl_restore 00:22:20.072 
************************************ 00:22:20.072 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:20.072 * Looking for test storage... 00:22:20.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.072 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:20.072 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:22:20.072 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:20.072 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.072 16:10:18 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.073 16:10:18 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.073 --rc genhtml_branch_coverage=1 00:22:20.073 --rc genhtml_function_coverage=1 00:22:20.073 --rc genhtml_legend=1 00:22:20.073 --rc geninfo_all_blocks=1 00:22:20.073 --rc geninfo_unexecuted_blocks=1 00:22:20.073 00:22:20.073 ' 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.073 --rc genhtml_branch_coverage=1 00:22:20.073 --rc genhtml_function_coverage=1 00:22:20.073 --rc genhtml_legend=1 00:22:20.073 --rc geninfo_all_blocks=1 00:22:20.073 --rc geninfo_unexecuted_blocks=1 00:22:20.073 00:22:20.073 ' 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.073 --rc genhtml_branch_coverage=1 00:22:20.073 --rc genhtml_function_coverage=1 00:22:20.073 --rc genhtml_legend=1 00:22:20.073 --rc geninfo_all_blocks=1 00:22:20.073 --rc geninfo_unexecuted_blocks=1 00:22:20.073 00:22:20.073 ' 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.073 --rc genhtml_branch_coverage=1 00:22:20.073 --rc genhtml_function_coverage=1 00:22:20.073 --rc genhtml_legend=1 00:22:20.073 --rc geninfo_all_blocks=1 00:22:20.073 --rc geninfo_unexecuted_blocks=1 00:22:20.073 00:22:20.073 ' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
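
The `lt 1.15 2` check traced a few records above is scripts/common.sh comparing the installed lcov version against 2: each version string is split on `.`, `-`, and `:` into numeric fields, and the fields are compared pairwise. A condensed sketch of that comparison, reusing the helper names from the trace (the real script also sanitizes each field through its `decimal` helper; this is an illustration, not the verbatim source):

    # Minimal sketch of the cmp_versions/lt logic traced above.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        # Walk the longer field list; absent fields default to 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]   # every field compared equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo older   # 1 < 2 on the first field, so this returns 0

Here lcov 1.15 compares less than 2, which is why the trace then exports the LCOV_OPTS/LCOV coverage settings seen in the surrounding records.
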
00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.JD5reCsoX5 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:20.073 
16:10:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77104 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.073 16:10:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77104 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77104 ']' 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.073 16:10:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:20.334 [2024-11-20 16:10:18.386759] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:22:20.334 [2024-11-20 16:10:18.386880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77104 ] 00:22:20.334 [2024-11-20 16:10:18.543983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.595 [2024-11-20 16:10:18.643244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.170 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.170 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:21.170 16:10:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:21.170 16:10:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:21.170 16:10:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:21.170 16:10:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:21.170 16:10:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:21.170 16:10:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:21.429 16:10:19 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:21.429 16:10:19 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:21.429 16:10:19 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:21.429 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:21.429 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:21.429 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:21.429 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:21.429 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:21.690 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:21.690 { 00:22:21.690 "name": "nvme0n1", 00:22:21.690 "aliases": [ 00:22:21.690 "e6da5d78-23e6-4e00-8537-4dc0c6e359e0" 00:22:21.690 ], 00:22:21.690 "product_name": "NVMe disk", 00:22:21.690 "block_size": 4096, 00:22:21.690 "num_blocks": 1310720, 00:22:21.690 "uuid": 
"e6da5d78-23e6-4e00-8537-4dc0c6e359e0", 00:22:21.690 "numa_id": -1, 00:22:21.690 "assigned_rate_limits": { 00:22:21.690 "rw_ios_per_sec": 0, 00:22:21.690 "rw_mbytes_per_sec": 0, 00:22:21.690 "r_mbytes_per_sec": 0, 00:22:21.690 "w_mbytes_per_sec": 0 00:22:21.690 }, 00:22:21.690 "claimed": true, 00:22:21.690 "claim_type": "read_many_write_one", 00:22:21.690 "zoned": false, 00:22:21.690 "supported_io_types": { 00:22:21.690 "read": true, 00:22:21.690 "write": true, 00:22:21.690 "unmap": true, 00:22:21.690 "flush": true, 00:22:21.690 "reset": true, 00:22:21.690 "nvme_admin": true, 00:22:21.690 "nvme_io": true, 00:22:21.690 "nvme_io_md": false, 00:22:21.690 "write_zeroes": true, 00:22:21.690 "zcopy": false, 00:22:21.690 "get_zone_info": false, 00:22:21.690 "zone_management": false, 00:22:21.690 "zone_append": false, 00:22:21.690 "compare": true, 00:22:21.690 "compare_and_write": false, 00:22:21.690 "abort": true, 00:22:21.690 "seek_hole": false, 00:22:21.690 "seek_data": false, 00:22:21.690 "copy": true, 00:22:21.690 "nvme_iov_md": false 00:22:21.690 }, 00:22:21.690 "driver_specific": { 00:22:21.690 "nvme": [ 00:22:21.690 { 00:22:21.690 "pci_address": "0000:00:11.0", 00:22:21.690 "trid": { 00:22:21.690 "trtype": "PCIe", 00:22:21.690 "traddr": "0000:00:11.0" 00:22:21.690 }, 00:22:21.690 "ctrlr_data": { 00:22:21.690 "cntlid": 0, 00:22:21.690 "vendor_id": "0x1b36", 00:22:21.690 "model_number": "QEMU NVMe Ctrl", 00:22:21.690 "serial_number": "12341", 00:22:21.690 "firmware_revision": "8.0.0", 00:22:21.690 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:21.690 "oacs": { 00:22:21.690 "security": 0, 00:22:21.690 "format": 1, 00:22:21.690 "firmware": 0, 00:22:21.690 "ns_manage": 1 00:22:21.690 }, 00:22:21.690 "multi_ctrlr": false, 00:22:21.690 "ana_reporting": false 00:22:21.690 }, 00:22:21.690 "vs": { 00:22:21.690 "nvme_version": "1.4" 00:22:21.690 }, 00:22:21.690 "ns_data": { 00:22:21.690 "id": 1, 00:22:21.690 "can_share": false 00:22:21.691 } 00:22:21.691 } 00:22:21.691 ], 00:22:21.691 "mp_policy": "active_passive" 00:22:21.691 } 00:22:21.691 } 00:22:21.691 ]' 00:22:21.691 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:21.691 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:21.691 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:21.691 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:21.691 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:21.691 16:10:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:21.691 16:10:19 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:21.691 16:10:19 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:21.691 16:10:19 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:21.691 16:10:19 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:21.691 16:10:19 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:21.951 16:10:20 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f 00:22:21.951 16:10:20 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:21.951 16:10:20 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a66ceaa5-6a3d-4a28-be08-e97d6aa76e8f 00:22:22.212 16:10:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:22.212 16:10:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=e75ea8e7-d933-41ce-9639-1c7f618a92a0 00:22:22.212 16:10:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e75ea8e7-d933-41ce-9639-1c7f618a92a0 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:22.473 16:10:20 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.473 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.473 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.473 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:22.473 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:22.473 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.733 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:22.733 { 00:22:22.733 "name": "7de6e8f5-b7c6-4bae-a51c-67082ebed8ce", 00:22:22.733 "aliases": [ 00:22:22.733 "lvs/nvme0n1p0" 00:22:22.733 ], 00:22:22.733 "product_name": "Logical Volume", 00:22:22.733 "block_size": 4096, 00:22:22.733 "num_blocks": 26476544, 00:22:22.733 "uuid": "7de6e8f5-b7c6-4bae-a51c-67082ebed8ce", 00:22:22.733 "assigned_rate_limits": { 00:22:22.733 "rw_ios_per_sec": 0, 00:22:22.733 "rw_mbytes_per_sec": 0, 00:22:22.733 "r_mbytes_per_sec": 0, 00:22:22.733 "w_mbytes_per_sec": 0 00:22:22.733 }, 00:22:22.733 "claimed": false, 00:22:22.733 "zoned": false, 00:22:22.733 "supported_io_types": { 00:22:22.733 "read": true, 00:22:22.733 "write": true, 00:22:22.733 "unmap": true, 00:22:22.733 "flush": false, 00:22:22.733 "reset": true, 00:22:22.733 "nvme_admin": false, 00:22:22.733 "nvme_io": false, 00:22:22.733 "nvme_io_md": false, 00:22:22.733 "write_zeroes": true, 00:22:22.733 "zcopy": false, 00:22:22.733 "get_zone_info": false, 00:22:22.733 "zone_management": false, 00:22:22.733 "zone_append": false, 00:22:22.733 "compare": false, 00:22:22.733 "compare_and_write": false, 00:22:22.733 "abort": false, 00:22:22.733 "seek_hole": true, 00:22:22.733 "seek_data": true, 00:22:22.733 "copy": false, 00:22:22.733 "nvme_iov_md": false 00:22:22.733 }, 00:22:22.733 "driver_specific": { 00:22:22.733 "lvol": { 00:22:22.733 "lvol_store_uuid": "e75ea8e7-d933-41ce-9639-1c7f618a92a0", 00:22:22.733 "base_bdev": "nvme0n1", 00:22:22.733 "thin_provision": true, 00:22:22.733 "num_allocated_clusters": 0, 00:22:22.733 "snapshot": false, 00:22:22.733 "clone": false, 00:22:22.733 "esnap_clone": false 00:22:22.733 } 00:22:22.733 } 00:22:22.733 } 00:22:22.733 ]' 00:22:22.733 16:10:20 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:22.733 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:22.733 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:22.733 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:22.733 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:22.733 16:10:20 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:22.733 16:10:20 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:22.733 16:10:20 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:22.733 16:10:20 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:22.993 16:10:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:22.993 16:10:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:22.993 16:10:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.993 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:22.993 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.993 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:22.993 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:22.993 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.253 { 00:22:23.253 "name": "7de6e8f5-b7c6-4bae-a51c-67082ebed8ce", 00:22:23.253 "aliases": [ 00:22:23.253 "lvs/nvme0n1p0" 00:22:23.253 ], 00:22:23.253 "product_name": "Logical Volume", 00:22:23.253 "block_size": 4096, 00:22:23.253 "num_blocks": 26476544, 00:22:23.253 "uuid": "7de6e8f5-b7c6-4bae-a51c-67082ebed8ce", 00:22:23.253 "assigned_rate_limits": { 00:22:23.253 "rw_ios_per_sec": 0, 00:22:23.253 "rw_mbytes_per_sec": 0, 00:22:23.253 "r_mbytes_per_sec": 0, 00:22:23.253 "w_mbytes_per_sec": 0 00:22:23.253 }, 00:22:23.253 "claimed": false, 00:22:23.253 "zoned": false, 00:22:23.253 "supported_io_types": { 00:22:23.253 "read": true, 00:22:23.253 "write": true, 00:22:23.253 "unmap": true, 00:22:23.253 "flush": false, 00:22:23.253 "reset": true, 00:22:23.253 "nvme_admin": false, 00:22:23.253 "nvme_io": false, 00:22:23.253 "nvme_io_md": false, 00:22:23.253 "write_zeroes": true, 00:22:23.253 "zcopy": false, 00:22:23.253 "get_zone_info": false, 00:22:23.253 "zone_management": false, 00:22:23.253 "zone_append": false, 00:22:23.253 "compare": false, 00:22:23.253 "compare_and_write": false, 00:22:23.253 "abort": false, 00:22:23.253 "seek_hole": true, 00:22:23.253 "seek_data": true, 00:22:23.253 "copy": false, 00:22:23.253 "nvme_iov_md": false 00:22:23.253 }, 00:22:23.253 "driver_specific": { 00:22:23.253 "lvol": { 00:22:23.253 "lvol_store_uuid": "e75ea8e7-d933-41ce-9639-1c7f618a92a0", 00:22:23.253 "base_bdev": "nvme0n1", 00:22:23.253 "thin_provision": true, 00:22:23.253 "num_allocated_clusters": 0, 00:22:23.253 "snapshot": false, 00:22:23.253 "clone": false, 00:22:23.253 "esnap_clone": false 00:22:23.253 } 00:22:23.253 } 00:22:23.253 } 00:22:23.253 ]' 00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
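
The `jq` pipeline being traced at this point is autotest_common.sh's `get_bdev_size` helper sizing a bdev in MiB: it asks `bdev_get_bdevs` for the bdev's JSON, extracts `block_size` and `num_blocks`, and converts the product to mebibytes. A short sketch of that calculation (assuming `rpc.py` and `jq` are on PATH; the trace invokes rpc.py by its full repo path):

    # Sketch of the get_bdev_size calculation traced here: bdev capacity in MiB.
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 for the lvol
        echo $((bs * nb / 1024 / 1024))               # 4096 * 26476544 / 2^20 = 103424
    }

That matches the two sizes visible in the trace: nvme0n1 at 4096 * 1310720 blocks = 5120 MiB, and the thin-provisioned lvol 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce at 4096 * 26476544 blocks = 103424 MiB.
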
00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.253 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.253 16:10:21 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:23.253 16:10:21 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:23.513 16:10:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:23.513 16:10:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:23.513 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:23.513 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:23.513 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:23.513 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:23.513 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce 00:22:23.784 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.784 { 00:22:23.784 "name": "7de6e8f5-b7c6-4bae-a51c-67082ebed8ce", 00:22:23.784 "aliases": [ 00:22:23.784 "lvs/nvme0n1p0" 00:22:23.784 ], 00:22:23.784 "product_name": "Logical Volume", 00:22:23.784 "block_size": 4096, 00:22:23.784 "num_blocks": 26476544, 00:22:23.784 "uuid": "7de6e8f5-b7c6-4bae-a51c-67082ebed8ce", 00:22:23.784 "assigned_rate_limits": { 00:22:23.784 "rw_ios_per_sec": 0, 00:22:23.784 "rw_mbytes_per_sec": 0, 00:22:23.784 "r_mbytes_per_sec": 0, 00:22:23.784 "w_mbytes_per_sec": 0 00:22:23.784 }, 00:22:23.784 "claimed": false, 00:22:23.784 "zoned": false, 00:22:23.784 "supported_io_types": { 00:22:23.784 "read": true, 00:22:23.784 "write": true, 00:22:23.784 "unmap": true, 00:22:23.784 "flush": false, 00:22:23.784 "reset": true, 00:22:23.784 "nvme_admin": false, 00:22:23.784 "nvme_io": false, 00:22:23.784 "nvme_io_md": false, 00:22:23.784 "write_zeroes": true, 00:22:23.784 "zcopy": false, 00:22:23.784 "get_zone_info": false, 00:22:23.784 "zone_management": false, 00:22:23.784 "zone_append": false, 00:22:23.784 "compare": false, 00:22:23.784 "compare_and_write": false, 00:22:23.784 "abort": false, 00:22:23.784 "seek_hole": true, 00:22:23.784 "seek_data": true, 00:22:23.784 "copy": false, 00:22:23.784 "nvme_iov_md": false 00:22:23.784 }, 00:22:23.784 "driver_specific": { 00:22:23.784 "lvol": { 00:22:23.784 "lvol_store_uuid": "e75ea8e7-d933-41ce-9639-1c7f618a92a0", 00:22:23.784 "base_bdev": "nvme0n1", 00:22:23.784 "thin_provision": true, 00:22:23.784 "num_allocated_clusters": 0, 00:22:23.784 "snapshot": false, 00:22:23.784 "clone": false, 00:22:23.784 "esnap_clone": false 00:22:23.784 } 00:22:23.784 } 00:22:23.784 } 00:22:23.784 ]' 00:22:23.784 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:23.784 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.784 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.784 16:10:21 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.784 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.784 16:10:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.784 16:10:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:23.784 16:10:21 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce --l2p_dram_limit 10' 00:22:23.784 16:10:21 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:23.784 16:10:21 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:23.784 16:10:21 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:23.784 16:10:21 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:23.785 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:23.785 16:10:21 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7de6e8f5-b7c6-4bae-a51c-67082ebed8ce --l2p_dram_limit 10 -c nvc0n1p0 00:22:24.057 [2024-11-20 16:10:22.150520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.150566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.057 [2024-11-20 16:10:22.150582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:24.057 [2024-11-20 16:10:22.150591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.150656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.150666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.057 [2024-11-20 16:10:22.150676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:24.057 [2024-11-20 16:10:22.150683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.150705] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.057 [2024-11-20 16:10:22.151469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.057 [2024-11-20 16:10:22.151497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.151505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.057 [2024-11-20 16:10:22.151515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:22:24.057 [2024-11-20 16:10:22.151522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.151647] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 794ad26d-746e-41d5-9c76-50f7c33cb882 00:22:24.057 [2024-11-20 16:10:22.152716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.153180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:24.057 [2024-11-20 16:10:22.153192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:24.057 [2024-11-20 16:10:22.153203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.158621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 
16:10:22.158657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.057 [2024-11-20 16:10:22.158666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.376 ms 00:22:24.057 [2024-11-20 16:10:22.158675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.158775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.158788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.057 [2024-11-20 16:10:22.158796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:24.057 [2024-11-20 16:10:22.158808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.158862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.158874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.057 [2024-11-20 16:10:22.158881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:24.057 [2024-11-20 16:10:22.158892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.158914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.057 [2024-11-20 16:10:22.162584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.162617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.057 [2024-11-20 16:10:22.162629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:22:24.057 [2024-11-20 16:10:22.162637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.162670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.057 [2024-11-20 16:10:22.162678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.057 [2024-11-20 16:10:22.162686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:24.057 [2024-11-20 16:10:22.162693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.057 [2024-11-20 16:10:22.162711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:24.057 [2024-11-20 16:10:22.162858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.057 [2024-11-20 16:10:22.162873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.057 [2024-11-20 16:10:22.162883] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.057 [2024-11-20 16:10:22.162894] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.057 [2024-11-20 16:10:22.162903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.057 [2024-11-20 16:10:22.162913] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:24.057 [2024-11-20 16:10:22.162920] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.057 [2024-11-20 16:10:22.162931] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.058 [2024-11-20 16:10:22.162938] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.058 [2024-11-20 16:10:22.162947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.058 [2024-11-20 16:10:22.162954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.058 [2024-11-20 16:10:22.162963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:22:24.058 [2024-11-20 16:10:22.162976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.058 [2024-11-20 16:10:22.163075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.058 [2024-11-20 16:10:22.163085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.058 [2024-11-20 16:10:22.163094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:24.058 [2024-11-20 16:10:22.163101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.058 [2024-11-20 16:10:22.163203] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.058 [2024-11-20 16:10:22.163223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.058 [2024-11-20 16:10:22.163233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.058 [2024-11-20 16:10:22.163257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:24.058 [2024-11-20 16:10:22.163281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.058 [2024-11-20 16:10:22.163295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.058 [2024-11-20 16:10:22.163301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:24.058 [2024-11-20 16:10:22.163309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.058 [2024-11-20 16:10:22.163316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.058 [2024-11-20 16:10:22.163324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:24.058 [2024-11-20 16:10:22.163330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.058 [2024-11-20 16:10:22.163347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.058 [2024-11-20 16:10:22.163373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.058 
[2024-11-20 16:10:22.163395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.058 [2024-11-20 16:10:22.163418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.058 [2024-11-20 16:10:22.163439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.058 [2024-11-20 16:10:22.163462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.058 [2024-11-20 16:10:22.163477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.058 [2024-11-20 16:10:22.163483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:24.058 [2024-11-20 16:10:22.163491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.058 [2024-11-20 16:10:22.163498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.058 [2024-11-20 16:10:22.163506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:24.058 [2024-11-20 16:10:22.163513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.058 [2024-11-20 16:10:22.163527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:24.058 [2024-11-20 16:10:22.163535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163541] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.058 [2024-11-20 16:10:22.163550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.058 [2024-11-20 16:10:22.163557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.058 [2024-11-20 16:10:22.163574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:24.058 [2024-11-20 16:10:22.163583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.058 [2024-11-20 16:10:22.163590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.058 [2024-11-20 16:10:22.163598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.058 [2024-11-20 16:10:22.163606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.058 [2024-11-20 16:10:22.163614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.058 [2024-11-20 16:10:22.163624] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.058 [2024-11-20 
16:10:22.163634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:24.058 [2024-11-20 16:10:22.163654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:24.058 [2024-11-20 16:10:22.163661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:24.058 [2024-11-20 16:10:22.163669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:24.058 [2024-11-20 16:10:22.163676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:24.058 [2024-11-20 16:10:22.163684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:24.058 [2024-11-20 16:10:22.163691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:24.058 [2024-11-20 16:10:22.163700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:24.058 [2024-11-20 16:10:22.163706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:24.058 [2024-11-20 16:10:22.163717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:24.058 [2024-11-20 16:10:22.163767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.058 [2024-11-20 16:10:22.163776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.058 [2024-11-20 16:10:22.163793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.058 [2024-11-20 16:10:22.163800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.058 [2024-11-20 16:10:22.163809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.058 [2024-11-20 16:10:22.163816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.058 [2024-11-20 16:10:22.163825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.058 [2024-11-20 16:10:22.163833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:22:24.058 [2024-11-20 16:10:22.163842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.058 [2024-11-20 16:10:22.163878] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:24.058 [2024-11-20 16:10:22.163891] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:27.357 [2024-11-20 16:10:25.225728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.225785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:27.357 [2024-11-20 16:10:25.225802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3061.828 ms 00:22:27.357 [2024-11-20 16:10:25.225814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.251283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.251329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.357 [2024-11-20 16:10:25.251341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.267 ms 00:22:27.357 [2024-11-20 16:10:25.251350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.251474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.251486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:27.357 [2024-11-20 16:10:25.251494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:27.357 [2024-11-20 16:10:25.251508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.281680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.281729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.357 [2024-11-20 16:10:25.281741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.136 ms 00:22:27.357 [2024-11-20 16:10:25.281750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.281781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.281794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.357 [2024-11-20 16:10:25.281802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:27.357 [2024-11-20 16:10:25.281811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.282157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.282180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.357 [2024-11-20 16:10:25.282189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:22:27.357 [2024-11-20 16:10:25.282198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 
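Each step of the 'FTL startup' management sequence above is traced as a four-record group (Action, name, duration, status), so per-step timings can be tabulated straight from this console output; note how "Scrub NV cache" (3061.828 ms) dominates the steps around it. A minimal log-analysis sketch, assuming the output has been saved to a file — console.log is a hypothetical name — and using GNU sed to split the fused records one per line before awk sees them:

  sed 's/\[2024-/\n[2024-/g' console.log | awk '
    /name:/     { n = $0; sub(/.*name: /, "", n); sub(/ [0-9:. ]+$/, "", n) }
    /duration:/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                  printf "%10s ms  %s\n", d, n }'

Against the records above this would print lines such as "  3061.828 ms  Scrub NV cache" and "     0.002 ms  Initialize valid map", making the slow steps easy to spot without reading the raw trace.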
[2024-11-20 16:10:25.282298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.282314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.357 [2024-11-20 16:10:25.282324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:27.357 [2024-11-20 16:10:25.282335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.296345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.296380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.357 [2024-11-20 16:10:25.296389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.993 ms 00:22:27.357 [2024-11-20 16:10:25.296398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.317079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:27.357 [2024-11-20 16:10:25.319942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.357 [2024-11-20 16:10:25.319976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:27.357 [2024-11-20 16:10:25.319992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.459 ms 00:22:27.357 [2024-11-20 16:10:25.320002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.357 [2024-11-20 16:10:25.398119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.398174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:27.358 [2024-11-20 16:10:25.398190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.075 ms 00:22:27.358 [2024-11-20 16:10:25.398198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.398378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.398392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:27.358 [2024-11-20 16:10:25.398404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:22:27.358 [2024-11-20 16:10:25.398412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.422826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.422862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:27.358 [2024-11-20 16:10:25.422876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.356 ms 00:22:27.358 [2024-11-20 16:10:25.422885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.446036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.446069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:27.358 [2024-11-20 16:10:25.446082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.110 ms 00:22:27.358 [2024-11-20 16:10:25.446089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.446679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.446696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:27.358 
[2024-11-20 16:10:25.446707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:22:27.358 [2024-11-20 16:10:25.446716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.520696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.520741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:27.358 [2024-11-20 16:10:25.520759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.933 ms 00:22:27.358 [2024-11-20 16:10:25.520767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.545514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.545551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:27.358 [2024-11-20 16:10:25.545565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.673 ms 00:22:27.358 [2024-11-20 16:10:25.545572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.568845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.568876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:27.358 [2024-11-20 16:10:25.568890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.233 ms 00:22:27.358 [2024-11-20 16:10:25.568898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.592993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.593028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:27.358 [2024-11-20 16:10:25.593040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.057 ms 00:22:27.358 [2024-11-20 16:10:25.593049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.593088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.593097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:27.358 [2024-11-20 16:10:25.593109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:27.358 [2024-11-20 16:10:25.593116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.593192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.358 [2024-11-20 16:10:25.593201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:27.358 [2024-11-20 16:10:25.593213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:27.358 [2024-11-20 16:10:25.593220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.358 [2024-11-20 16:10:25.594286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3443.365 ms, result 0 00:22:27.358 { 00:22:27.358 "name": "ftl0", 00:22:27.358 "uuid": "794ad26d-746e-41d5-9c76-50f7c33cb882" 00:22:27.358 } 00:22:27.619 16:10:25 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:27.619 16:10:25 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:27.619 16:10:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:27.619 16:10:25 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:27.882 [2024-11-20 16:10:26.005694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.005758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:27.882 [2024-11-20 16:10:26.005772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:27.882 [2024-11-20 16:10:26.005786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.005809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:27.882 [2024-11-20 16:10:26.008388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.008417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:27.882 [2024-11-20 16:10:26.008431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:22:27.882 [2024-11-20 16:10:26.008440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.008701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.008718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.882 [2024-11-20 16:10:26.008737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:22:27.882 [2024-11-20 16:10:26.008744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.011983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.012002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.882 [2024-11-20 16:10:26.012014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:22:27.882 [2024-11-20 16:10:26.012022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.018282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.018307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.882 [2024-11-20 16:10:26.018321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:22:27.882 [2024-11-20 16:10:26.018330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.042611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.042644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.882 [2024-11-20 16:10:26.042658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.214 ms 00:22:27.882 [2024-11-20 16:10:26.042665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.057289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.057321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.882 [2024-11-20 16:10:26.057335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.584 ms 00:22:27.882 [2024-11-20 16:10:26.057344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.057490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.057500] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.882 [2024-11-20 16:10:26.057511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:27.882 [2024-11-20 16:10:26.057518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.081379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.081409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.882 [2024-11-20 16:10:26.081421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.838 ms 00:22:27.882 [2024-11-20 16:10:26.081428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.105033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.105062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.882 [2024-11-20 16:10:26.105074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.567 ms 00:22:27.882 [2024-11-20 16:10:26.105081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.882 [2024-11-20 16:10:26.127778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.882 [2024-11-20 16:10:26.127810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.882 [2024-11-20 16:10:26.127822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.660 ms 00:22:27.882 [2024-11-20 16:10:26.127831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.144 [2024-11-20 16:10:26.150496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.144 [2024-11-20 16:10:26.150526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:28.144 [2024-11-20 16:10:26.150538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.594 ms 00:22:28.144 [2024-11-20 16:10:26.150547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.144 [2024-11-20 16:10:26.150582] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:28.144 [2024-11-20 16:10:26.150596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150677] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:28.144 [2024-11-20 16:10:26.150778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 
[2024-11-20 16:10:26.150905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.150998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:28.145 [2024-11-20 16:10:26.151117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:28.145 [2024-11-20 16:10:26.151467] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:28.145 [2024-11-20 16:10:26.151478] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 794ad26d-746e-41d5-9c76-50f7c33cb882 00:22:28.145 [2024-11-20 16:10:26.151486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:28.145 [2024-11-20 16:10:26.151496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:28.145 [2024-11-20 16:10:26.151503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:28.145 [2024-11-20 16:10:26.151515] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:28.145 [2024-11-20 16:10:26.151522] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:28.145 [2024-11-20 16:10:26.151531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:28.145 [2024-11-20 16:10:26.151538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:28.145 [2024-11-20 16:10:26.151546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:28.146 [2024-11-20 16:10:26.151552] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:28.146 [2024-11-20 16:10:26.151561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.146 [2024-11-20 16:10:26.151568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:28.146 [2024-11-20 16:10:26.151578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:22:28.146 [2024-11-20 16:10:26.151585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.164077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.146 [2024-11-20 16:10:26.164105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:28.146 [2024-11-20 16:10:26.164117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.459 ms 00:22:28.146 [2024-11-20 16:10:26.164126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.164481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.146 [2024-11-20 16:10:26.164490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:28.146 [2024-11-20 16:10:26.164501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:28.146 [2024-11-20 16:10:26.164508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.205535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.205570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.146 [2024-11-20 16:10:26.205583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.205592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.205663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.205672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.146 [2024-11-20 16:10:26.205684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.205692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.205789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.205801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.146 [2024-11-20 16:10:26.205812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.205820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.205842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.205850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.146 [2024-11-20 16:10:26.205860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.205868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.282946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.283004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.146 [2024-11-20 16:10:26.283018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:28.146 [2024-11-20 16:10:26.283026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.345891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.345940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.146 [2024-11-20 16:10:26.345955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.345965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.346052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.146 [2024-11-20 16:10:26.346062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.346069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.346145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.146 [2024-11-20 16:10:26.346154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.346161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.346266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.146 [2024-11-20 16:10:26.346275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.346283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.346324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:28.146 [2024-11-20 16:10:26.346332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.346339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.346386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.146 [2024-11-20 16:10:26.346396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.346403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.146 [2024-11-20 16:10:26.346455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.146 [2024-11-20 16:10:26.346464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.146 [2024-11-20 16:10:26.346471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.146 [2024-11-20 16:10:26.346596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.868 ms, result 0 00:22:28.146 true 00:22:28.146 16:10:26 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77104 
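With bdev_ftl_unload returning true and the 'FTL shutdown' management process finishing cleanly (duration 340.868 ms, result 0), restore.sh@66 tears the app down via killprocess 77104; the xtrace that follows shows the helper's individual commands. Reconstructed as a condensed, illustrative sketch — the real helper lives in common/autotest_common.sh and has more branches than shown here; the @-numbers refer to the source lines visible in the trace:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                          # @954: refuse an empty pid
      kill -0 "$pid" || return 0                         # @958: anything left to kill?
      if [ "$(uname)" = Linux ]; then                    # @959
          process_name=$(ps --no-headers -o comm= "$pid")  # @960: here it resolves to reactor_0
      fi
      if [ "$process_name" != sudo ]; then               # @964: never signal a sudo wrapper directly
          echo "killing process with pid $pid"           # @972
          kill "$pid"                                    # @973
          wait "$pid"                                    # @978: reap the process and propagate its exit status
      fi
  }

The trailing wait matters: it is what lets the test harness surface a non-zero exit from the SPDK reactor instead of silently discarding it.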
00:22:28.146 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77104 ']' 00:22:28.146 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77104 00:22:28.146 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:28.146 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.146 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77104 00:22:28.407 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.407 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.407 killing process with pid 77104 00:22:28.407 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77104' 00:22:28.407 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77104 00:22:28.407 16:10:26 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77104 00:22:36.546 16:10:33 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:39.849 262144+0 records in 00:22:39.849 262144+0 records out 00:22:39.849 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28505 s, 251 MB/s 00:22:39.849 16:10:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:41.766 16:10:39 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:41.766 [2024-11-20 16:10:39.977811] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:22:41.766 [2024-11-20 16:10:39.977928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77336 ] 00:22:42.028 [2024-11-20 16:10:40.135211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.028 [2024-11-20 16:10:40.233197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.287 [2024-11-20 16:10:40.489758] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:42.287 [2024-11-20 16:10:40.489816] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:42.550 [2024-11-20 16:10:40.648123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.648169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:42.550 [2024-11-20 16:10:40.648186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:42.550 [2024-11-20 16:10:40.648196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.648246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.648256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:42.550 [2024-11-20 16:10:40.648267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:42.550 [2024-11-20 16:10:40.648274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.648293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:22:42.550 [2024-11-20 16:10:40.649029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:42.550 [2024-11-20 16:10:40.649053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.649061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:42.550 [2024-11-20 16:10:40.649069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:22:42.550 [2024-11-20 16:10:40.649077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.650099] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:42.550 [2024-11-20 16:10:40.662612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.662659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:42.550 [2024-11-20 16:10:40.662672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.514 ms 00:22:42.550 [2024-11-20 16:10:40.662681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.662754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.662764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:42.550 [2024-11-20 16:10:40.662775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:42.550 [2024-11-20 16:10:40.662783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.667610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.667641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:42.550 [2024-11-20 16:10:40.667652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:22:42.550 [2024-11-20 16:10:40.667665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.667743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.667752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:42.550 [2024-11-20 16:10:40.667760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:42.550 [2024-11-20 16:10:40.667767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.667818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.667829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:42.550 [2024-11-20 16:10:40.667837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:42.550 [2024-11-20 16:10:40.667845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.550 [2024-11-20 16:10:40.667869] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:42.550 [2024-11-20 16:10:40.671371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.550 [2024-11-20 16:10:40.671406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:42.550 [2024-11-20 16:10:40.671418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:22:42.551 [2024-11-20 16:10:40.671428] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.551 [2024-11-20 16:10:40.671457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.551 [2024-11-20 16:10:40.671464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:42.551 [2024-11-20 16:10:40.671472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:42.551 [2024-11-20 16:10:40.671480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.551 [2024-11-20 16:10:40.671499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:42.551 [2024-11-20 16:10:40.671516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:42.551 [2024-11-20 16:10:40.671550] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:42.551 [2024-11-20 16:10:40.671568] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:42.551 [2024-11-20 16:10:40.671669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:42.551 [2024-11-20 16:10:40.671679] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:42.551 [2024-11-20 16:10:40.671689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:42.551 [2024-11-20 16:10:40.671699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:42.551 [2024-11-20 16:10:40.671708] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:42.551 [2024-11-20 16:10:40.671741] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:42.551 [2024-11-20 16:10:40.671751] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:42.551 [2024-11-20 16:10:40.671758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:42.551 [2024-11-20 16:10:40.671768] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:42.551 [2024-11-20 16:10:40.671776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.551 [2024-11-20 16:10:40.671783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:42.551 [2024-11-20 16:10:40.671791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:42.551 [2024-11-20 16:10:40.671798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.551 [2024-11-20 16:10:40.671881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.551 [2024-11-20 16:10:40.671889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:42.551 [2024-11-20 16:10:40.671896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:42.551 [2024-11-20 16:10:40.671904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.551 [2024-11-20 16:10:40.672020] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:42.551 [2024-11-20 16:10:40.672036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:42.551 [2024-11-20 16:10:40.672044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:42.551 [2024-11-20 16:10:40.672051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:42.551 [2024-11-20 16:10:40.672066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:42.551 [2024-11-20 16:10:40.672086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:42.551 [2024-11-20 16:10:40.672100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:42.551 [2024-11-20 16:10:40.672106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:42.551 [2024-11-20 16:10:40.672113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:42.551 [2024-11-20 16:10:40.672119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:42.551 [2024-11-20 16:10:40.672128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:42.551 [2024-11-20 16:10:40.672140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:42.551 [2024-11-20 16:10:40.672153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:42.551 [2024-11-20 16:10:40.672172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:42.551 [2024-11-20 16:10:40.672191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:42.551 [2024-11-20 16:10:40.672210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:42.551 [2024-11-20 16:10:40.672229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:42.551 [2024-11-20 16:10:40.672248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:42.551 [2024-11-20 16:10:40.672260] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:42.551 [2024-11-20 16:10:40.672266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:42.551 [2024-11-20 16:10:40.672272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:42.551 [2024-11-20 16:10:40.672279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:42.551 [2024-11-20 16:10:40.672285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:42.551 [2024-11-20 16:10:40.672291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:42.551 [2024-11-20 16:10:40.672305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:42.551 [2024-11-20 16:10:40.672311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672317] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:42.551 [2024-11-20 16:10:40.672324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:42.551 [2024-11-20 16:10:40.672331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:42.551 [2024-11-20 16:10:40.672347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:42.551 [2024-11-20 16:10:40.672353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:42.551 [2024-11-20 16:10:40.672360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:42.551 [2024-11-20 16:10:40.672366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:42.551 [2024-11-20 16:10:40.672372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:42.551 [2024-11-20 16:10:40.672379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:42.551 [2024-11-20 16:10:40.672386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:42.551 [2024-11-20 16:10:40.672395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:42.551 [2024-11-20 16:10:40.672403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:42.551 [2024-11-20 16:10:40.672410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:42.551 [2024-11-20 16:10:40.672417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:42.551 [2024-11-20 16:10:40.672424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:42.551 [2024-11-20 16:10:40.672430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:42.551 [2024-11-20 16:10:40.672437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:42.551 [2024-11-20 16:10:40.672444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:42.551 [2024-11-20 16:10:40.672451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:42.551 [2024-11-20 16:10:40.672457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:42.551 [2024-11-20 16:10:40.672464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:42.551 [2024-11-20 16:10:40.672471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:42.551 [2024-11-20 16:10:40.672478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:42.551 [2024-11-20 16:10:40.672484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:42.551 [2024-11-20 16:10:40.672491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:42.551 [2024-11-20 16:10:40.672497] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:42.551 [2024-11-20 16:10:40.672507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:42.551 [2024-11-20 16:10:40.672514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:42.552 [2024-11-20 16:10:40.672521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:42.552 [2024-11-20 16:10:40.672528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:42.552 [2024-11-20 16:10:40.672536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:42.552 [2024-11-20 16:10:40.672543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.672550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:42.552 [2024-11-20 16:10:40.672558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:22:42.552 [2024-11-20 16:10:40.672566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.698475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.698509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.552 [2024-11-20 16:10:40.698519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.857 ms 00:22:42.552 [2024-11-20 16:10:40.698526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.698611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.698619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:42.552 [2024-11-20 16:10:40.698627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:22:42.552 [2024-11-20 16:10:40.698635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.740204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.740241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.552 [2024-11-20 16:10:40.740253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.519 ms 00:22:42.552 [2024-11-20 16:10:40.740262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.740304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.740313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.552 [2024-11-20 16:10:40.740325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:42.552 [2024-11-20 16:10:40.740332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.740705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.740748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.552 [2024-11-20 16:10:40.740762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:22:42.552 [2024-11-20 16:10:40.740774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.740915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.740927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.552 [2024-11-20 16:10:40.740935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:22:42.552 [2024-11-20 16:10:40.740946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.754008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.754039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.552 [2024-11-20 16:10:40.754052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.042 ms 00:22:42.552 [2024-11-20 16:10:40.754061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.766764] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:42.552 [2024-11-20 16:10:40.766799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:42.552 [2024-11-20 16:10:40.766812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.766821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:42.552 [2024-11-20 16:10:40.766831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:22:42.552 [2024-11-20 16:10:40.766839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.552 [2024-11-20 16:10:40.791447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.552 [2024-11-20 16:10:40.791489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:42.552 [2024-11-20 16:10:40.791500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.567 ms 00:22:42.552 [2024-11-20 16:10:40.791509] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.813 [2024-11-20 16:10:40.803649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.813 [2024-11-20 16:10:40.803689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:42.813 [2024-11-20 16:10:40.803699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.099 ms 00:22:42.813 [2024-11-20 16:10:40.803706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.813 [2024-11-20 16:10:40.815420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.813 [2024-11-20 16:10:40.815452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:42.814 [2024-11-20 16:10:40.815462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.672 ms 00:22:42.814 [2024-11-20 16:10:40.815470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.816096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.816119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:42.814 [2024-11-20 16:10:40.816128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:22:42.814 [2024-11-20 16:10:40.816136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.871493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.871535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:42.814 [2024-11-20 16:10:40.871548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.338 ms 00:22:42.814 [2024-11-20 16:10:40.871561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.881958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:42.814 [2024-11-20 16:10:40.884478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.884506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:42.814 [2024-11-20 16:10:40.884517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.873 ms 00:22:42.814 [2024-11-20 16:10:40.884527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.884623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.884634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:42.814 [2024-11-20 16:10:40.884644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:42.814 [2024-11-20 16:10:40.884653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.884744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.884756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:42.814 [2024-11-20 16:10:40.884766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:42.814 [2024-11-20 16:10:40.884774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.884793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.884802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:42.814 [2024-11-20 16:10:40.884811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:42.814 [2024-11-20 16:10:40.884819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.884850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:42.814 [2024-11-20 16:10:40.884861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.884872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:42.814 [2024-11-20 16:10:40.884881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:42.814 [2024-11-20 16:10:40.884890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.908144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.908176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:42.814 [2024-11-20 16:10:40.908188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.237 ms 00:22:42.814 [2024-11-20 16:10:40.908196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.908269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.814 [2024-11-20 16:10:40.908278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:42.814 [2024-11-20 16:10:40.908286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:42.814 [2024-11-20 16:10:40.908293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.814 [2024-11-20 16:10:40.909326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.803 ms, result 0 00:22:43.757  [2024-11-20T16:10:42.949Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T16:10:44.332Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-20T16:10:45.329Z] Copying: 31/1024 [MB] (10 MBps) [2024-11-20T16:10:46.269Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-20T16:10:47.213Z] Copying: 49560/1048576 [kB] (7176 kBps) [2024-11-20T16:10:48.155Z] Copying: 59364/1048576 [kB] (9804 kBps) [2024-11-20T16:10:49.096Z] Copying: 69604/1048576 [kB] (10240 kBps) [2024-11-20T16:10:50.034Z] Copying: 78/1024 [MB] (10 MBps) [2024-11-20T16:10:50.980Z] Copying: 88/1024 [MB] (10 MBps) [2024-11-20T16:10:51.925Z] Copying: 99/1024 [MB] (10 MBps) [2024-11-20T16:10:53.327Z] Copying: 111368/1048576 [kB] (9984 kBps) [2024-11-20T16:10:54.270Z] Copying: 120960/1048576 [kB] (9592 kBps) [2024-11-20T16:10:55.214Z] Copying: 130336/1048576 [kB] (9376 kBps) [2024-11-20T16:10:56.157Z] Copying: 139768/1048576 [kB] (9432 kBps) [2024-11-20T16:10:57.101Z] Copying: 149216/1048576 [kB] (9448 kBps) [2024-11-20T16:10:58.072Z] Copying: 158548/1048576 [kB] (9332 kBps) [2024-11-20T16:10:59.058Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-20T16:11:00.010Z] Copying: 175/1024 [MB] (10 MBps) [2024-11-20T16:11:00.952Z] Copying: 189360/1048576 [kB] (9504 kBps) [2024-11-20T16:11:02.337Z] Copying: 195/1024 [MB] (10 MBps) [2024-11-20T16:11:03.278Z] Copying: 206/1024 [MB] (11 MBps) [2024-11-20T16:11:04.222Z] Copying: 221200/1048576 [kB] (10084 kBps) [2024-11-20T16:11:05.165Z] Copying: 230/1024 [MB] (14 MBps) [2024-11-20T16:11:06.111Z] Copying: 240/1024 [MB] (10 MBps) [2024-11-20T16:11:07.059Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-20T16:11:08.004Z] 
Copying: 266212/1048576 [kB] (9396 kBps) [2024-11-20T16:11:08.948Z] Copying: 275996/1048576 [kB] (9784 kBps) [2024-11-20T16:11:09.949Z] Copying: 286040/1048576 [kB] (10044 kBps) [2024-11-20T16:11:10.949Z] Copying: 295288/1048576 [kB] (9248 kBps) [2024-11-20T16:11:12.338Z] Copying: 304632/1048576 [kB] (9344 kBps) [2024-11-20T16:11:13.282Z] Copying: 314444/1048576 [kB] (9812 kBps) [2024-11-20T16:11:14.225Z] Copying: 324064/1048576 [kB] (9620 kBps) [2024-11-20T16:11:15.169Z] Copying: 332456/1048576 [kB] (8392 kBps) [2024-11-20T16:11:16.193Z] Copying: 342228/1048576 [kB] (9772 kBps) [2024-11-20T16:11:17.135Z] Copying: 352140/1048576 [kB] (9912 kBps) [2024-11-20T16:11:18.217Z] Copying: 353/1024 [MB] (10 MBps) [2024-11-20T16:11:19.159Z] Copying: 372516/1048576 [kB] (10028 kBps) [2024-11-20T16:11:20.102Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-20T16:11:21.044Z] Copying: 392768/1048576 [kB] (9896 kBps) [2024-11-20T16:11:21.986Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-20T16:11:22.930Z] Copying: 413592/1048576 [kB] (10188 kBps) [2024-11-20T16:11:24.338Z] Copying: 423308/1048576 [kB] (9716 kBps) [2024-11-20T16:11:25.284Z] Copying: 432464/1048576 [kB] (9156 kBps) [2024-11-20T16:11:26.227Z] Copying: 442068/1048576 [kB] (9604 kBps) [2024-11-20T16:11:27.168Z] Copying: 451876/1048576 [kB] (9808 kBps) [2024-11-20T16:11:28.110Z] Copying: 461428/1048576 [kB] (9552 kBps) [2024-11-20T16:11:29.055Z] Copying: 471216/1048576 [kB] (9788 kBps) [2024-11-20T16:11:30.000Z] Copying: 480760/1048576 [kB] (9544 kBps) [2024-11-20T16:11:30.944Z] Copying: 489936/1048576 [kB] (9176 kBps) [2024-11-20T16:11:32.333Z] Copying: 498640/1048576 [kB] (8704 kBps) [2024-11-20T16:11:33.278Z] Copying: 507512/1048576 [kB] (8872 kBps) [2024-11-20T16:11:34.269Z] Copying: 516760/1048576 [kB] (9248 kBps) [2024-11-20T16:11:35.229Z] Copying: 525840/1048576 [kB] (9080 kBps) [2024-11-20T16:11:36.172Z] Copying: 535768/1048576 [kB] (9928 kBps) [2024-11-20T16:11:37.117Z] Copying: 545684/1048576 [kB] (9916 kBps) [2024-11-20T16:11:38.059Z] Copying: 555328/1048576 [kB] (9644 kBps) [2024-11-20T16:11:39.003Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-20T16:11:39.946Z] Copying: 563/1024 [MB] (10 MBps) [2024-11-20T16:11:41.331Z] Copying: 574/1024 [MB] (10 MBps) [2024-11-20T16:11:41.971Z] Copying: 585/1024 [MB] (11 MBps) [2024-11-20T16:11:43.355Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-20T16:11:43.927Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-20T16:11:45.315Z] Copying: 619/1024 [MB] (12 MBps) [2024-11-20T16:11:46.256Z] Copying: 630/1024 [MB] (10 MBps) [2024-11-20T16:11:47.199Z] Copying: 641/1024 [MB] (10 MBps) [2024-11-20T16:11:48.143Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-20T16:11:49.117Z] Copying: 663/1024 [MB] (11 MBps) [2024-11-20T16:11:50.060Z] Copying: 674/1024 [MB] (11 MBps) [2024-11-20T16:11:50.999Z] Copying: 685/1024 [MB] (11 MBps) [2024-11-20T16:11:51.941Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-20T16:11:53.328Z] Copying: 707/1024 [MB] (11 MBps) [2024-11-20T16:11:54.271Z] Copying: 732624/1048576 [kB] (7752 kBps) [2024-11-20T16:11:55.244Z] Copying: 726/1024 [MB] (10 MBps) [2024-11-20T16:11:56.191Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-20T16:11:57.136Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-20T16:11:58.078Z] Copying: 757/1024 [MB] (10 MBps) [2024-11-20T16:11:59.022Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-20T16:11:59.967Z] Copying: 796400/1048576 [kB] (9816 kBps) [2024-11-20T16:12:01.357Z] Copying: 806268/1048576 [kB] (9868 kBps) [2024-11-20T16:12:01.930Z] Copying: 815872/1048576 [kB] (9604 kBps) 
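The progress ticks in this run alternate between MiB granularity ("10/1024 [MB] (10 MBps)") and KiB granularity ("266212/1048576 [kB] (9396 kBps)") for the same 1 GiB transfer, and the run closes below with an overall figure of "1024/1024 [MB] (average 10135 kBps)". As a rough cross-check, the wall-clock window between the first tick (16:10:42.949Z) and the last (16:12:24.684Z) is about 102 s, which is consistent with that reported average. A minimal sketch of the arithmetic, assuming GNU coreutils date (timestamps taken from the surrounding log):

  #!/usr/bin/env bash
  # Cross-check the "average 10135 kBps" figure against the tick timestamps.
  start=$(date -ud '2024-11-20T16:10:42Z' +%s)   # first progress tick
  end=$(date -ud '2024-11-20T16:12:24Z' +%s)     # final progress tick
  total_kb=$((1024 * 1024))                      # 1024 MiB expressed in KiB
  elapsed=$((end - start))
  echo "elapsed: ${elapsed} s"                   # 102 s
  echo "rate:    $((total_kb / elapsed)) kBps"   # ~10280 kBps vs reported 10135 kBps

spdk_dd averages over its full run, including setup time before the first tick is printed, so its reported figure lands slightly below the tick-to-tick estimate; both agree that this FTL workload sustains roughly 10 MBps here.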
[2024-11-20T16:12:03.318Z] Copying: 825544/1048576 [kB] (9672 kBps) [2024-11-20T16:12:04.258Z] Copying: 834744/1048576 [kB] (9200 kBps) [2024-11-20T16:12:05.201Z] Copying: 844692/1048576 [kB] (9948 kBps) [2024-11-20T16:12:06.146Z] Copying: 835/1024 [MB] (10 MBps) [2024-11-20T16:12:07.090Z] Copying: 865300/1048576 [kB] (9932 kBps) [2024-11-20T16:12:08.033Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-20T16:12:08.978Z] Copying: 885856/1048576 [kB] (9808 kBps) [2024-11-20T16:12:10.360Z] Copying: 895124/1048576 [kB] (9268 kBps) [2024-11-20T16:12:10.931Z] Copying: 905140/1048576 [kB] (10016 kBps) [2024-11-20T16:12:12.318Z] Copying: 894/1024 [MB] (10 MBps) [2024-11-20T16:12:13.261Z] Copying: 904/1024 [MB] (10 MBps) [2024-11-20T16:12:14.231Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-20T16:12:15.176Z] Copying: 946496/1048576 [kB] (9976 kBps) [2024-11-20T16:12:16.118Z] Copying: 934/1024 [MB] (10 MBps) [2024-11-20T16:12:17.061Z] Copying: 967020/1048576 [kB] (9732 kBps) [2024-11-20T16:12:18.004Z] Copying: 977064/1048576 [kB] (10044 kBps) [2024-11-20T16:12:18.948Z] Copying: 987104/1048576 [kB] (10040 kBps) [2024-11-20T16:12:20.335Z] Copying: 996848/1048576 [kB] (9744 kBps) [2024-11-20T16:12:21.277Z] Copying: 1006644/1048576 [kB] (9796 kBps) [2024-11-20T16:12:22.223Z] Copying: 1016416/1048576 [kB] (9772 kBps) [2024-11-20T16:12:23.166Z] Copying: 1025772/1048576 [kB] (9356 kBps) [2024-11-20T16:12:24.109Z] Copying: 1035012/1048576 [kB] (9240 kBps) [2024-11-20T16:12:24.684Z] Copying: 1044108/1048576 [kB] (9096 kBps) [2024-11-20T16:12:24.684Z] Copying: 1024/1024 [MB] (average 10135 kBps)[2024-11-20 16:12:24.378808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.378919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.434 [2024-11-20 16:12:24.378951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.434 [2024-11-20 16:12:24.378971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.379006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.434 [2024-11-20 16:12:24.381674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.381792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.434 [2024-11-20 16:12:24.381851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:24:26.434 [2024-11-20 16:12:24.381881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.384951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.385051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.434 [2024-11-20 16:12:24.385102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:24:26.434 [2024-11-20 16:12:24.385124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.403209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.403316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:26.434 [2024-11-20 16:12:24.403369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.056 ms 00:24:26.434 [2024-11-20 16:12:24.403392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 
16:12:24.409603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.409632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:26.434 [2024-11-20 16:12:24.409643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:24:26.434 [2024-11-20 16:12:24.409651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.433512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.433545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:26.434 [2024-11-20 16:12:24.433556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.814 ms 00:24:26.434 [2024-11-20 16:12:24.433565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.447141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.447172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:26.434 [2024-11-20 16:12:24.447183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.543 ms 00:24:26.434 [2024-11-20 16:12:24.447192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.447315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.447325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.434 [2024-11-20 16:12:24.447338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:26.434 [2024-11-20 16:12:24.447345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.471012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.471041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:26.434 [2024-11-20 16:12:24.471052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.653 ms 00:24:26.434 [2024-11-20 16:12:24.471060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.494602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.494631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:26.434 [2024-11-20 16:12:24.494649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.510 ms 00:24:26.434 [2024-11-20 16:12:24.494657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.517742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.517770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.434 [2024-11-20 16:12:24.517781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.054 ms 00:24:26.434 [2024-11-20 16:12:24.517789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.540537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.434 [2024-11-20 16:12:24.540565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.434 [2024-11-20 16:12:24.540575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.694 ms 00:24:26.434 [2024-11-20 16:12:24.540583] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.434 [2024-11-20 16:12:24.540614] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.434 [2024-11-20 16:12:24.540629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.434 [2024-11-20 16:12:24.540746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.540992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541191] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 
16:12:24.541375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.435 [2024-11-20 16:12:24.541397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.435 [2024-11-20 16:12:24.541408] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 794ad26d-746e-41d5-9c76-50f7c33cb882 00:24:26.435 [2024-11-20 16:12:24.541418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:26.436 [2024-11-20 16:12:24.541425] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:26.436 [2024-11-20 16:12:24.541432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:26.436 [2024-11-20 16:12:24.541439] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:26.436 [2024-11-20 16:12:24.541447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.436 [2024-11-20 16:12:24.541454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.436 [2024-11-20 16:12:24.541461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.436 [2024-11-20 16:12:24.541473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.436 [2024-11-20 16:12:24.541479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.436 [2024-11-20 16:12:24.541487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.436 [2024-11-20 16:12:24.541494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.436 [2024-11-20 16:12:24.541502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:24:26.436 [2024-11-20 16:12:24.541509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.553610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.436 [2024-11-20 16:12:24.553639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.436 [2024-11-20 16:12:24.553649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.086 ms 00:24:26.436 [2024-11-20 16:12:24.553656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.554007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.436 [2024-11-20 16:12:24.554021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.436 [2024-11-20 16:12:24.554029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:24:26.436 [2024-11-20 16:12:24.554036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.586420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.436 [2024-11-20 16:12:24.586455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.436 [2024-11-20 16:12:24.586466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.436 [2024-11-20 16:12:24.586476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.586529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.436 [2024-11-20 16:12:24.586537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:24:26.436 [2024-11-20 16:12:24.586544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.436 [2024-11-20 16:12:24.586552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.586626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.436 [2024-11-20 16:12:24.586636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.436 [2024-11-20 16:12:24.586643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.436 [2024-11-20 16:12:24.586650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.586665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.436 [2024-11-20 16:12:24.586672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.436 [2024-11-20 16:12:24.586680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.436 [2024-11-20 16:12:24.586688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.436 [2024-11-20 16:12:24.663162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.436 [2024-11-20 16:12:24.663209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.436 [2024-11-20 16:12:24.663221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.436 [2024-11-20 16:12:24.663229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.697 [2024-11-20 16:12:24.725207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.697 [2024-11-20 16:12:24.725255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.697 [2024-11-20 16:12:24.725267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.697 [2024-11-20 16:12:24.725274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.697 [2024-11-20 16:12:24.725330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.697 [2024-11-20 16:12:24.725338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.697 [2024-11-20 16:12:24.725346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.697 [2024-11-20 16:12:24.725353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.697 [2024-11-20 16:12:24.725404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.697 [2024-11-20 16:12:24.725413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.697 [2024-11-20 16:12:24.725422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.697 [2024-11-20 16:12:24.725429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.697 [2024-11-20 16:12:24.725510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.697 [2024-11-20 16:12:24.725522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.697 [2024-11-20 16:12:24.725530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.697 [2024-11-20 16:12:24.725538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.697 [2024-11-20 16:12:24.725566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.697 [2024-11-20 16:12:24.725575] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:26.697 [2024-11-20 16:12:24.725583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.697 [2024-11-20 16:12:24.725591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.697 [2024-11-20 16:12:24.725622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.698 [2024-11-20 16:12:24.725633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.698 [2024-11-20 16:12:24.725641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.698 [2024-11-20 16:12:24.725648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.698 [2024-11-20 16:12:24.725687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.698 [2024-11-20 16:12:24.725696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.698 [2024-11-20 16:12:24.725704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.698 [2024-11-20 16:12:24.725711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.698 [2024-11-20 16:12:24.725837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.000 ms, result 0 00:24:28.615 00:24:28.615 00:24:28.615 16:12:26 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:28.615 [2024-11-20 16:12:26.494755] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:24:28.615 [2024-11-20 16:12:26.494874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78419 ] 00:24:28.615 [2024-11-20 16:12:26.655673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.615 [2024-11-20 16:12:26.757071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.877 [2024-11-20 16:12:27.015312] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.877 [2024-11-20 16:12:27.015369] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:29.138 [2024-11-20 16:12:27.171454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.138 [2024-11-20 16:12:27.171494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:29.138 [2024-11-20 16:12:27.171509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:29.138 [2024-11-20 16:12:27.171517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.138 [2024-11-20 16:12:27.171558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.138 [2024-11-20 16:12:27.171568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.139 [2024-11-20 16:12:27.171578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:29.139 [2024-11-20 16:12:27.171585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.171601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:29.139 [2024-11-20 16:12:27.172297] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:29.139 [2024-11-20 16:12:27.172316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.172324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.139 [2024-11-20 16:12:27.172333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:24:29.139 [2024-11-20 16:12:27.172340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.173380] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:29.139 [2024-11-20 16:12:27.186296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.186324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:29.139 [2024-11-20 16:12:27.186335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.918 ms 00:24:29.139 [2024-11-20 16:12:27.186344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.186395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.186404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:29.139 [2024-11-20 16:12:27.186412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:29.139 [2024-11-20 16:12:27.186419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.191307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.191332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.139 [2024-11-20 16:12:27.191341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:24:29.139 [2024-11-20 16:12:27.191353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.191419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.191427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.139 [2024-11-20 16:12:27.191434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:29.139 [2024-11-20 16:12:27.191441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.191479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.191488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:29.139 [2024-11-20 16:12:27.191496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.139 [2024-11-20 16:12:27.191503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.191524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:29.139 [2024-11-20 16:12:27.194671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.194693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.139 [2024-11-20 16:12:27.194703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:24:29.139 [2024-11-20 
16:12:27.194713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.194749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.194758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:29.139 [2024-11-20 16:12:27.194767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:29.139 [2024-11-20 16:12:27.194775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.194794] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:29.139 [2024-11-20 16:12:27.194813] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:29.139 [2024-11-20 16:12:27.194849] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:29.139 [2024-11-20 16:12:27.194867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:29.139 [2024-11-20 16:12:27.194969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:29.139 [2024-11-20 16:12:27.194980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:29.139 [2024-11-20 16:12:27.194992] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:29.139 [2024-11-20 16:12:27.195002] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195012] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195020] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:29.139 [2024-11-20 16:12:27.195027] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:29.139 [2024-11-20 16:12:27.195034] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:29.139 [2024-11-20 16:12:27.195043] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:29.139 [2024-11-20 16:12:27.195051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.195058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:29.139 [2024-11-20 16:12:27.195065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:24:29.139 [2024-11-20 16:12:27.195072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.195153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.139 [2024-11-20 16:12:27.195161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:29.139 [2024-11-20 16:12:27.195168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:29.139 [2024-11-20 16:12:27.195175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.139 [2024-11-20 16:12:27.195276] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:29.139 [2024-11-20 16:12:27.195285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:29.139 [2024-11-20 16:12:27.195293] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:29.139 [2024-11-20 16:12:27.195314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:29.139 [2024-11-20 16:12:27.195336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.139 [2024-11-20 16:12:27.195348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:29.139 [2024-11-20 16:12:27.195354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:29.139 [2024-11-20 16:12:27.195361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.139 [2024-11-20 16:12:27.195368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:29.139 [2024-11-20 16:12:27.195375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:29.139 [2024-11-20 16:12:27.195386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:29.139 [2024-11-20 16:12:27.195399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:29.139 [2024-11-20 16:12:27.195418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:29.139 [2024-11-20 16:12:27.195437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:29.139 [2024-11-20 16:12:27.195456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:29.139 [2024-11-20 16:12:27.195475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.139 [2024-11-20 16:12:27.195488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:29.139 [2024-11-20 16:12:27.195494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:29.139 [2024-11-20 16:12:27.195500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.140 [2024-11-20 16:12:27.195506] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:29.140 [2024-11-20 16:12:27.195513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:29.140 [2024-11-20 16:12:27.195519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.140 [2024-11-20 16:12:27.195525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:29.140 [2024-11-20 16:12:27.195532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:29.140 [2024-11-20 16:12:27.195538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.140 [2024-11-20 16:12:27.195544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:29.140 [2024-11-20 16:12:27.195550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:29.140 [2024-11-20 16:12:27.195557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.140 [2024-11-20 16:12:27.195563] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:29.140 [2024-11-20 16:12:27.195570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:29.140 [2024-11-20 16:12:27.195579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.140 [2024-11-20 16:12:27.195586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.140 [2024-11-20 16:12:27.195593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:29.140 [2024-11-20 16:12:27.195600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:29.140 [2024-11-20 16:12:27.195607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:29.140 [2024-11-20 16:12:27.195613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:29.140 [2024-11-20 16:12:27.195620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:29.140 [2024-11-20 16:12:27.195626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:29.140 [2024-11-20 16:12:27.195634] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:29.140 [2024-11-20 16:12:27.195642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:29.140 [2024-11-20 16:12:27.195657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:29.140 [2024-11-20 16:12:27.195664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:29.140 [2024-11-20 16:12:27.195671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:29.140 [2024-11-20 16:12:27.195677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:29.140 [2024-11-20 16:12:27.195684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:29.140 [2024-11-20 16:12:27.195691] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:29.140 [2024-11-20 16:12:27.195698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:29.140 [2024-11-20 16:12:27.195705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:29.140 [2024-11-20 16:12:27.195711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:29.140 [2024-11-20 16:12:27.195757] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:29.140 [2024-11-20 16:12:27.195767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:29.140 [2024-11-20 16:12:27.195783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:29.140 [2024-11-20 16:12:27.195790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:29.140 [2024-11-20 16:12:27.195797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:29.140 [2024-11-20 16:12:27.195804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.195811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:29.140 [2024-11-20 16:12:27.195819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:24:29.140 [2024-11-20 16:12:27.195826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.221386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.221416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.140 [2024-11-20 16:12:27.221426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.507 ms 00:24:29.140 [2024-11-20 16:12:27.221436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.221514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.221522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:29.140 [2024-11-20 16:12:27.221530] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:29.140 [2024-11-20 16:12:27.221537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.270802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.270838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.140 [2024-11-20 16:12:27.270850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.217 ms 00:24:29.140 [2024-11-20 16:12:27.270860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.270902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.270913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.140 [2024-11-20 16:12:27.270925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:29.140 [2024-11-20 16:12:27.270933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.271294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.271310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.140 [2024-11-20 16:12:27.271320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:24:29.140 [2024-11-20 16:12:27.271329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.271455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.271464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.140 [2024-11-20 16:12:27.271476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:29.140 [2024-11-20 16:12:27.271483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.284334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.284360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.140 [2024-11-20 16:12:27.284370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.834 ms 00:24:29.140 [2024-11-20 16:12:27.284377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.140 [2024-11-20 16:12:27.297376] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:29.140 [2024-11-20 16:12:27.297405] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:29.140 [2024-11-20 16:12:27.297417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.140 [2024-11-20 16:12:27.297425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:29.141 [2024-11-20 16:12:27.297432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:24:29.141 [2024-11-20 16:12:27.297439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.141 [2024-11-20 16:12:27.321901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.141 [2024-11-20 16:12:27.321930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:29.141 [2024-11-20 16:12:27.321940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.424 ms 
00:24:29.141 [2024-11-20 16:12:27.321948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.141 [2024-11-20 16:12:27.333790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.141 [2024-11-20 16:12:27.333814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:29.141 [2024-11-20 16:12:27.333823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.805 ms 00:24:29.141 [2024-11-20 16:12:27.333830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.141 [2024-11-20 16:12:27.345446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.141 [2024-11-20 16:12:27.345475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:29.141 [2024-11-20 16:12:27.345486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.581 ms 00:24:29.141 [2024-11-20 16:12:27.345494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.141 [2024-11-20 16:12:27.346115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.141 [2024-11-20 16:12:27.346136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:29.141 [2024-11-20 16:12:27.346147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:24:29.141 [2024-11-20 16:12:27.346155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.402576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.402629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.402 [2024-11-20 16:12:27.402648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.403 ms 00:24:29.402 [2024-11-20 16:12:27.402656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.413059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:29.402 [2024-11-20 16:12:27.415510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.415535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.402 [2024-11-20 16:12:27.415547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.802 ms 00:24:29.402 [2024-11-20 16:12:27.415556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.415657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.415669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.402 [2024-11-20 16:12:27.415681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:29.402 [2024-11-20 16:12:27.415690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.415767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.415779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.402 [2024-11-20 16:12:27.415788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:29.402 [2024-11-20 16:12:27.415796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.415817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 
16:12:27.415826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.402 [2024-11-20 16:12:27.415834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.402 [2024-11-20 16:12:27.415843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.415876] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.402 [2024-11-20 16:12:27.415886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.415895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.402 [2024-11-20 16:12:27.415904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:29.402 [2024-11-20 16:12:27.415912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.439531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.439559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.402 [2024-11-20 16:12:27.439574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.602 ms 00:24:29.402 [2024-11-20 16:12:27.439583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.439652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.402 [2024-11-20 16:12:27.439662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:29.402 [2024-11-20 16:12:27.439670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:29.402 [2024-11-20 16:12:27.439677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.402 [2024-11-20 16:12:27.440519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.666 ms, result 0 00:24:30.420  [2024-11-20T16:12:29.627Z] Copying: 9796/1048576 [kB] (9796 kBps) [2024-11-20T16:12:31.017Z] Copying: 21/1024 [MB] (11 MBps) [2024-11-20T16:12:31.956Z] Copying: 30952/1048576 [kB] (9356 kBps) [2024-11-20T16:12:32.901Z] Copying: 44/1024 [MB] (13 MBps) [2024-11-20T16:12:33.842Z] Copying: 57/1024 [MB] (13 MBps) [2024-11-20T16:12:34.781Z] Copying: 68884/1048576 [kB] (9868 kBps) [2024-11-20T16:12:35.723Z] Copying: 78824/1048576 [kB] (9940 kBps) [2024-11-20T16:12:36.666Z] Copying: 88128/1048576 [kB] (9304 kBps) [2024-11-20T16:12:38.052Z] Copying: 98040/1048576 [kB] (9912 kBps) [2024-11-20T16:12:38.624Z] Copying: 107932/1048576 [kB] (9892 kBps) [2024-11-20T16:12:40.008Z] Copying: 117536/1048576 [kB] (9604 kBps) [2024-11-20T16:12:40.951Z] Copying: 127128/1048576 [kB] (9592 kBps) [2024-11-20T16:12:41.916Z] Copying: 137360/1048576 [kB] (10232 kBps) [2024-11-20T16:12:42.893Z] Copying: 145/1024 [MB] (11 MBps) [2024-11-20T16:12:43.860Z] Copying: 158192/1048576 [kB] (9552 kBps) [2024-11-20T16:12:44.800Z] Copying: 168344/1048576 [kB] (10152 kBps) [2024-11-20T16:12:45.741Z] Copying: 174/1024 [MB] (10 MBps) [2024-11-20T16:12:46.684Z] Copying: 185/1024 [MB] (10 MBps) [2024-11-20T16:12:47.624Z] Copying: 195/1024 [MB] (10 MBps) [2024-11-20T16:12:49.009Z] Copying: 206/1024 [MB] (10 MBps) [2024-11-20T16:12:49.958Z] Copying: 216/1024 [MB] (10 MBps) [2024-11-20T16:12:50.901Z] Copying: 226/1024 [MB] (10 MBps) [2024-11-20T16:12:51.844Z] Copying: 239/1024 [MB] (12 MBps) [2024-11-20T16:12:52.786Z] Copying: 249/1024 [MB] (10 MBps) [2024-11-20T16:12:53.728Z] 
Copying: 261/1024 [MB] (11 MBps) [2024-11-20T16:12:54.670Z] Copying: 277680/1048576 [kB] (10224 kBps) [2024-11-20T16:12:56.052Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-20T16:12:56.635Z] Copying: 298060/1048576 [kB] (10136 kBps) [2024-11-20T16:12:58.019Z] Copying: 307944/1048576 [kB] (9884 kBps) [2024-11-20T16:12:58.963Z] Copying: 317972/1048576 [kB] (10028 kBps) [2024-11-20T16:12:59.906Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-20T16:13:00.848Z] Copying: 332/1024 [MB] (11 MBps) [2024-11-20T16:13:01.792Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-20T16:13:02.731Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-20T16:13:03.673Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-20T16:13:05.060Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-20T16:13:05.632Z] Copying: 386/1024 [MB] (10 MBps) [2024-11-20T16:13:06.625Z] Copying: 403/1024 [MB] (16 MBps) [2024-11-20T16:13:08.014Z] Copying: 417/1024 [MB] (13 MBps) [2024-11-20T16:13:08.957Z] Copying: 431/1024 [MB] (14 MBps) [2024-11-20T16:13:09.898Z] Copying: 444/1024 [MB] (12 MBps) [2024-11-20T16:13:10.840Z] Copying: 457/1024 [MB] (13 MBps) [2024-11-20T16:13:11.782Z] Copying: 471/1024 [MB] (14 MBps) [2024-11-20T16:13:12.725Z] Copying: 489/1024 [MB] (18 MBps) [2024-11-20T16:13:13.668Z] Copying: 504/1024 [MB] (14 MBps) [2024-11-20T16:13:15.049Z] Copying: 516/1024 [MB] (12 MBps) [2024-11-20T16:13:15.619Z] Copying: 539/1024 [MB] (22 MBps) [2024-11-20T16:13:17.000Z] Copying: 553/1024 [MB] (14 MBps) [2024-11-20T16:13:17.978Z] Copying: 568/1024 [MB] (14 MBps) [2024-11-20T16:13:18.919Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-20T16:13:19.864Z] Copying: 589/1024 [MB] (10 MBps) [2024-11-20T16:13:20.805Z] Copying: 600/1024 [MB] (10 MBps) [2024-11-20T16:13:21.745Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-20T16:13:22.683Z] Copying: 626/1024 [MB] (15 MBps) [2024-11-20T16:13:23.624Z] Copying: 653/1024 [MB] (27 MBps) [2024-11-20T16:13:25.006Z] Copying: 681/1024 [MB] (27 MBps) [2024-11-20T16:13:25.947Z] Copying: 699/1024 [MB] (17 MBps) [2024-11-20T16:13:26.889Z] Copying: 716/1024 [MB] (17 MBps) [2024-11-20T16:13:27.833Z] Copying: 729/1024 [MB] (13 MBps) [2024-11-20T16:13:28.775Z] Copying: 739/1024 [MB] (10 MBps) [2024-11-20T16:13:29.717Z] Copying: 749/1024 [MB] (10 MBps) [2024-11-20T16:13:30.661Z] Copying: 761/1024 [MB] (12 MBps) [2024-11-20T16:13:32.048Z] Copying: 772/1024 [MB] (10 MBps) [2024-11-20T16:13:32.624Z] Copying: 782/1024 [MB] (10 MBps) [2024-11-20T16:13:34.012Z] Copying: 811620/1048576 [kB] (10100 kBps) [2024-11-20T16:13:34.953Z] Copying: 803/1024 [MB] (10 MBps) [2024-11-20T16:13:35.897Z] Copying: 814/1024 [MB] (10 MBps) [2024-11-20T16:13:36.841Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-20T16:13:37.785Z] Copying: 836/1024 [MB] (12 MBps) [2024-11-20T16:13:38.724Z] Copying: 847/1024 [MB] (11 MBps) [2024-11-20T16:13:39.667Z] Copying: 858/1024 [MB] (11 MBps) [2024-11-20T16:13:41.054Z] Copying: 875/1024 [MB] (16 MBps) [2024-11-20T16:13:41.627Z] Copying: 886/1024 [MB] (11 MBps) [2024-11-20T16:13:43.032Z] Copying: 897/1024 [MB] (10 MBps) [2024-11-20T16:13:43.642Z] Copying: 929452/1048576 [kB] (10084 kBps) [2024-11-20T16:13:45.032Z] Copying: 918/1024 [MB] (10 MBps) [2024-11-20T16:13:45.975Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-20T16:13:46.918Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-20T16:13:47.863Z] Copying: 951/1024 [MB] (10 MBps) [2024-11-20T16:13:48.861Z] Copying: 961/1024 [MB] (10 MBps) [2024-11-20T16:13:49.802Z] Copying: 971/1024 [MB] (10 MBps) [2024-11-20T16:13:50.746Z] Copying: 983/1024 [MB] (11 MBps) [2024-11-20T16:13:51.687Z] Copying: 993/1024 
[MB] (10 MBps) [2024-11-20T16:13:52.628Z] Copying: 1003/1024 [MB] (10 MBps) [2024-11-20T16:13:54.014Z] Copying: 1013/1024 [MB] (10 MBps) [2024-11-20T16:13:54.014Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-20 16:13:53.784396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.784454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:55.764 [2024-11-20 16:13:53.784467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:55.764 [2024-11-20 16:13:53.784476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.784498] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:55.764 [2024-11-20 16:13:53.787579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.787622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:55.764 [2024-11-20 16:13:53.787632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:25:55.764 [2024-11-20 16:13:53.787640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.787872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.787883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:55.764 [2024-11-20 16:13:53.787891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:25:55.764 [2024-11-20 16:13:53.787899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.791456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.791477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:55.764 [2024-11-20 16:13:53.791487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:25:55.764 [2024-11-20 16:13:53.791498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.797780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.797807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:55.764 [2024-11-20 16:13:53.797816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.267 ms 00:25:55.764 [2024-11-20 16:13:53.797823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.822526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.822558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:55.764 [2024-11-20 16:13:53.822568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.649 ms 00:25:55.764 [2024-11-20 16:13:53.822575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.836069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.836101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:55.764 [2024-11-20 16:13:53.836113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.460 ms 00:25:55.764 [2024-11-20 16:13:53.836120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.836249] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.836259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:55.764 [2024-11-20 16:13:53.836268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:55.764 [2024-11-20 16:13:53.836275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.860469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.860498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:55.764 [2024-11-20 16:13:53.860508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.180 ms 00:25:55.764 [2024-11-20 16:13:53.860515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.884491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.884533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:55.764 [2024-11-20 16:13:53.884544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.945 ms 00:25:55.764 [2024-11-20 16:13:53.884552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.907711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.907756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:55.764 [2024-11-20 16:13:53.907766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.126 ms 00:25:55.764 [2024-11-20 16:13:53.907774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.930854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.764 [2024-11-20 16:13:53.930885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:55.764 [2024-11-20 16:13:53.930895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.025 ms 00:25:55.764 [2024-11-20 16:13:53.930902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.764 [2024-11-20 16:13:53.930933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:55.764 [2024-11-20 16:13:53.930952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.930966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.930973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.930981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.930989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.930996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931020] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:55.764 [2024-11-20 16:13:53.931187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931209] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 
16:13:53.931389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:25:55.765 [2024-11-20 16:13:53.931575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:55.765 [2024-11-20 16:13:53.931709] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:55.765 [2024-11-20 16:13:53.931717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 794ad26d-746e-41d5-9c76-50f7c33cb882 00:25:55.765 [2024-11-20 16:13:53.931735] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:55.765 [2024-11-20 16:13:53.931742] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:55.765 [2024-11-20 16:13:53.931749] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:55.765 [2024-11-20 16:13:53.931756] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:55.765 [2024-11-20 16:13:53.931762] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:55.765 [2024-11-20 16:13:53.931770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:55.765 [2024-11-20 16:13:53.931784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:55.765 [2024-11-20 16:13:53.931791] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:55.765 [2024-11-20 16:13:53.931798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:55.765 [2024-11-20 16:13:53.931806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.765 [2024-11-20 16:13:53.931813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:55.765 [2024-11-20 16:13:53.931822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:25:55.765 [2024-11-20 16:13:53.931831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.765 [2024-11-20 16:13:53.944253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.765 [2024-11-20 16:13:53.944281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:55.765 [2024-11-20 16:13:53.944291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.407 ms 00:25:55.765 [2024-11-20 16:13:53.944298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.765 [2024-11-20 16:13:53.944645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.765 [2024-11-20 16:13:53.944659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:55.765 [2024-11-20 16:13:53.944672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:25:55.765 [2024-11-20 16:13:53.944679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.765 [2024-11-20 16:13:53.977489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.765 [2024-11-20 16:13:53.977525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.766 [2024-11-20 16:13:53.977535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.766 [2024-11-20 16:13:53.977543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.766 [2024-11-20 16:13:53.977598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.766 [2024-11-20 16:13:53.977606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.766 [2024-11-20 16:13:53.977618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.766 [2024-11-20 16:13:53.977625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.766 [2024-11-20 16:13:53.977679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.766 [2024-11-20 16:13:53.977688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.766 [2024-11-20 16:13:53.977697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.766 [2024-11-20 16:13:53.977705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.766 [2024-11-20 16:13:53.977720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.766 [2024-11-20 16:13:53.977739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.766 [2024-11-20 16:13:53.977747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.766 [2024-11-20 16:13:53.977758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.024 [2024-11-20 16:13:54.054776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.054825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:25:56.025 [2024-11-20 16:13:54.054840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.054849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.117715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.117769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.025 [2024-11-20 16:13:54.117785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.117793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.117860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.117869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.025 [2024-11-20 16:13:54.117878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.117885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.117918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.117927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.025 [2024-11-20 16:13:54.117934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.117942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.118029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.118038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.025 [2024-11-20 16:13:54.118046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.118053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.118079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.118088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:56.025 [2024-11-20 16:13:54.118095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.118103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.118139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.118147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.025 [2024-11-20 16:13:54.118155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.118162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.118198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.025 [2024-11-20 16:13:54.118207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.025 [2024-11-20 16:13:54.118215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.025 [2024-11-20 16:13:54.118222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.025 [2024-11-20 16:13:54.118331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
333.912 ms, result 0 00:25:56.595 00:25:56.595 00:25:56.595 16:13:54 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:59.138 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:59.138 16:13:56 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:59.138 [2024-11-20 16:13:57.035186] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:25:59.139 [2024-11-20 16:13:57.035309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79348 ] 00:25:59.139 [2024-11-20 16:13:57.196425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.139 [2024-11-20 16:13:57.297806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.400 [2024-11-20 16:13:57.555828] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:59.400 [2024-11-20 16:13:57.555893] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:59.665 [2024-11-20 16:13:57.713624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.665 [2024-11-20 16:13:57.713680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:59.666 [2024-11-20 16:13:57.713695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:59.666 [2024-11-20 16:13:57.713703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.713764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.713775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.666 [2024-11-20 16:13:57.713785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:59.666 [2024-11-20 16:13:57.713792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.713812] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:59.666 [2024-11-20 16:13:57.714551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:59.666 [2024-11-20 16:13:57.714574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.714582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.666 [2024-11-20 16:13:57.714591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:25:59.666 [2024-11-20 16:13:57.714599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.715682] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:59.666 [2024-11-20 16:13:57.728205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.728243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:59.666 [2024-11-20 16:13:57.728257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.523 ms 00:25:59.666 [2024-11-20 16:13:57.728265] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.728326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.728335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:59.666 [2024-11-20 16:13:57.728343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:59.666 [2024-11-20 16:13:57.728351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.733261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.733292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.666 [2024-11-20 16:13:57.733302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.860 ms 00:25:59.666 [2024-11-20 16:13:57.733313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.733381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.733389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.666 [2024-11-20 16:13:57.733397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:59.666 [2024-11-20 16:13:57.733404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.733453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.733463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:59.666 [2024-11-20 16:13:57.733471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:59.666 [2024-11-20 16:13:57.733479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.733503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:59.666 [2024-11-20 16:13:57.736682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.736709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.666 [2024-11-20 16:13:57.736719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:25:59.666 [2024-11-20 16:13:57.736741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.736771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.736780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:59.666 [2024-11-20 16:13:57.736789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:59.666 [2024-11-20 16:13:57.736797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.736818] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:59.666 [2024-11-20 16:13:57.736837] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:59.666 [2024-11-20 16:13:57.736885] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:59.666 [2024-11-20 16:13:57.736912] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:59.666 [2024-11-20 16:13:57.737016] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:59.666 [2024-11-20 16:13:57.737026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:59.666 [2024-11-20 16:13:57.737036] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:59.666 [2024-11-20 16:13:57.737046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737055] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737063] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:59.666 [2024-11-20 16:13:57.737070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:59.666 [2024-11-20 16:13:57.737077] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:59.666 [2024-11-20 16:13:57.737086] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:59.666 [2024-11-20 16:13:57.737094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.737100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:59.666 [2024-11-20 16:13:57.737108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:59.666 [2024-11-20 16:13:57.737115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.737197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.666 [2024-11-20 16:13:57.737205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:59.666 [2024-11-20 16:13:57.737212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:59.666 [2024-11-20 16:13:57.737219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.666 [2024-11-20 16:13:57.737322] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:59.666 [2024-11-20 16:13:57.737338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:59.666 [2024-11-20 16:13:57.737347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:59.666 [2024-11-20 16:13:57.737369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:59.666 [2024-11-20 16:13:57.737390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.666 [2024-11-20 16:13:57.737403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:59.666 [2024-11-20 16:13:57.737410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:59.666 [2024-11-20 16:13:57.737417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.666 [2024-11-20 
16:13:57.737423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:59.666 [2024-11-20 16:13:57.737430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:59.666 [2024-11-20 16:13:57.737444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:59.666 [2024-11-20 16:13:57.737457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:59.666 [2024-11-20 16:13:57.737477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:59.666 [2024-11-20 16:13:57.737496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:59.666 [2024-11-20 16:13:57.737516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:59.666 [2024-11-20 16:13:57.737534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.666 [2024-11-20 16:13:57.737546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:59.666 [2024-11-20 16:13:57.737553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.666 [2024-11-20 16:13:57.737565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:59.666 [2024-11-20 16:13:57.737572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:59.666 [2024-11-20 16:13:57.737579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.666 [2024-11-20 16:13:57.737585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:59.666 [2024-11-20 16:13:57.737592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:59.666 [2024-11-20 16:13:57.737598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.666 [2024-11-20 16:13:57.737605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:59.666 [2024-11-20 16:13:57.737611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:59.667 [2024-11-20 16:13:57.737617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.667 [2024-11-20 16:13:57.737623] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:59.667 [2024-11-20 16:13:57.737631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:25:59.667 [2024-11-20 16:13:57.737638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.667 [2024-11-20 16:13:57.737645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.667 [2024-11-20 16:13:57.737652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:59.667 [2024-11-20 16:13:57.737659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:59.667 [2024-11-20 16:13:57.737666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:59.667 [2024-11-20 16:13:57.737672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:59.667 [2024-11-20 16:13:57.737679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:59.667 [2024-11-20 16:13:57.737685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:59.667 [2024-11-20 16:13:57.737693] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:59.667 [2024-11-20 16:13:57.737702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:59.667 [2024-11-20 16:13:57.737718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:59.667 [2024-11-20 16:13:57.737737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:59.667 [2024-11-20 16:13:57.737744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:59.667 [2024-11-20 16:13:57.737751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:59.667 [2024-11-20 16:13:57.737758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:59.667 [2024-11-20 16:13:57.737766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:59.667 [2024-11-20 16:13:57.737773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:59.667 [2024-11-20 16:13:57.737780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:59.667 [2024-11-20 16:13:57.737787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737815] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:59.667 [2024-11-20 16:13:57.737823] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:59.667 [2024-11-20 16:13:57.737834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:59.667 [2024-11-20 16:13:57.737849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:59.667 [2024-11-20 16:13:57.737856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:59.667 [2024-11-20 16:13:57.737863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:59.667 [2024-11-20 16:13:57.737881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.737889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:59.667 [2024-11-20 16:13:57.737896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:25:59.667 [2024-11-20 16:13:57.737903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.763535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.763574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.667 [2024-11-20 16:13:57.763585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.577 ms 00:25:59.667 [2024-11-20 16:13:57.763592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.763676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.763685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:59.667 [2024-11-20 16:13:57.763693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:59.667 [2024-11-20 16:13:57.763700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.811716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.811766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.667 [2024-11-20 16:13:57.811779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.944 ms 00:25:59.667 [2024-11-20 16:13:57.811787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.811839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.811849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.667 [2024-11-20 16:13:57.811861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:59.667 [2024-11-20 16:13:57.811868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.812242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 
16:13:57.812267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.667 [2024-11-20 16:13:57.812276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:25:59.667 [2024-11-20 16:13:57.812283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.812412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.812422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.667 [2024-11-20 16:13:57.812430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:25:59.667 [2024-11-20 16:13:57.812442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.825623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.825662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.667 [2024-11-20 16:13:57.825676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.163 ms 00:25:59.667 [2024-11-20 16:13:57.825684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.838997] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:59.667 [2024-11-20 16:13:57.839036] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:59.667 [2024-11-20 16:13:57.839048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.839057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:59.667 [2024-11-20 16:13:57.839067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.245 ms 00:25:59.667 [2024-11-20 16:13:57.839074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.863539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.863578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:59.667 [2024-11-20 16:13:57.863589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.421 ms 00:25:59.667 [2024-11-20 16:13:57.863597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.875860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.875900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:59.667 [2024-11-20 16:13:57.875911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:25:59.667 [2024-11-20 16:13:57.875918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.888001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.888035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:59.667 [2024-11-20 16:13:57.888045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.047 ms 00:25:59.667 [2024-11-20 16:13:57.888052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.667 [2024-11-20 16:13:57.888660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.667 [2024-11-20 16:13:57.888684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:25:59.667 [2024-11-20 16:13:57.888693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:25:59.667 [2024-11-20 16:13:57.888703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.944969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.945026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:59.929 [2024-11-20 16:13:57.945044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.246 ms 00:25:59.929 [2024-11-20 16:13:57.945052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.955454] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:59.929 [2024-11-20 16:13:57.957949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.957992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:59.929 [2024-11-20 16:13:57.958010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.849 ms 00:25:59.929 [2024-11-20 16:13:57.958023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.958117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.958128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:59.929 [2024-11-20 16:13:57.958136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:59.929 [2024-11-20 16:13:57.958146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.958211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.958221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:59.929 [2024-11-20 16:13:57.958230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:59.929 [2024-11-20 16:13:57.958237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.958255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.958263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:59.929 [2024-11-20 16:13:57.958271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:59.929 [2024-11-20 16:13:57.958278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.958309] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:59.929 [2024-11-20 16:13:57.958319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.958326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:59.929 [2024-11-20 16:13:57.958335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:59.929 [2024-11-20 16:13:57.958342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.982113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.982147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:59.929 [2024-11-20 16:13:57.982158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 23.754 ms 00:25:59.929 [2024-11-20 16:13:57.982170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.982238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.929 [2024-11-20 16:13:57.982247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:59.929 [2024-11-20 16:13:57.982256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:59.929 [2024-11-20 16:13:57.982263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.929 [2024-11-20 16:13:57.983895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.836 ms, result 0 00:26:00.873  [2024-11-20T16:14:00.065Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T16:14:01.006Z] Copying: 20808/1048576 [kB] (10064 kBps) [2024-11-20T16:14:02.386Z] Copying: 31/1024 [MB] (10 MBps) [2024-11-20T16:14:03.328Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-20T16:14:04.270Z] Copying: 52944/1048576 [kB] (10204 kBps) [2024-11-20T16:14:05.214Z] Copying: 62872/1048576 [kB] (9928 kBps) [2024-11-20T16:14:06.158Z] Copying: 72/1024 [MB] (10 MBps) [2024-11-20T16:14:07.100Z] Copying: 83724/1048576 [kB] (9924 kBps) [2024-11-20T16:14:08.040Z] Copying: 93404/1048576 [kB] (9680 kBps) [2024-11-20T16:14:09.423Z] Copying: 102/1024 [MB] (10 MBps) [2024-11-20T16:14:10.366Z] Copying: 114/1024 [MB] (12 MBps) [2024-11-20T16:14:11.314Z] Copying: 126852/1048576 [kB] (9660 kBps) [2024-11-20T16:14:12.310Z] Copying: 135/1024 [MB] (12 MBps) [2024-11-20T16:14:13.250Z] Copying: 148/1024 [MB] (12 MBps) [2024-11-20T16:14:14.192Z] Copying: 158/1024 [MB] (10 MBps) [2024-11-20T16:14:15.137Z] Copying: 170/1024 [MB] (11 MBps) [2024-11-20T16:14:16.147Z] Copying: 180/1024 [MB] (10 MBps) [2024-11-20T16:14:17.091Z] Copying: 191/1024 [MB] (11 MBps) [2024-11-20T16:14:18.035Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-20T16:14:19.423Z] Copying: 212/1024 [MB] (10 MBps) [2024-11-20T16:14:20.368Z] Copying: 227256/1048576 [kB] (10160 kBps) [2024-11-20T16:14:21.314Z] Copying: 237400/1048576 [kB] (10144 kBps) [2024-11-20T16:14:22.260Z] Copying: 241/1024 [MB] (10 MBps) [2024-11-20T16:14:23.244Z] Copying: 257692/1048576 [kB] (10020 kBps) [2024-11-20T16:14:24.189Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-20T16:14:25.133Z] Copying: 272/1024 [MB] (10 MBps) [2024-11-20T16:14:26.078Z] Copying: 282/1024 [MB] (10 MBps) [2024-11-20T16:14:27.019Z] Copying: 293/1024 [MB] (10 MBps) [2024-11-20T16:14:28.404Z] Copying: 304/1024 [MB] (10 MBps) [2024-11-20T16:14:29.355Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-20T16:14:30.301Z] Copying: 325/1024 [MB] (10 MBps) [2024-11-20T16:14:31.241Z] Copying: 336/1024 [MB] (10 MBps) [2024-11-20T16:14:32.184Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-20T16:14:33.125Z] Copying: 358/1024 [MB] (10 MBps) [2024-11-20T16:14:34.068Z] Copying: 369/1024 [MB] (10 MBps) [2024-11-20T16:14:35.012Z] Copying: 379/1024 [MB] (10 MBps) [2024-11-20T16:14:36.397Z] Copying: 390/1024 [MB] (10 MBps) [2024-11-20T16:14:37.340Z] Copying: 401/1024 [MB] (10 MBps) [2024-11-20T16:14:38.283Z] Copying: 411/1024 [MB] (10 MBps) [2024-11-20T16:14:39.227Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-20T16:14:40.170Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-20T16:14:41.113Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-20T16:14:42.197Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-20T16:14:43.142Z] Copying: 472/1024 [MB] (20 MBps) [2024-11-20T16:14:44.085Z] Copying: 490/1024 [MB] (17 MBps) 
[2024-11-20T16:14:45.028Z] Copying: 511/1024 [MB] (20 MBps) [2024-11-20T16:14:46.413Z] Copying: 528/1024 [MB] (16 MBps) [2024-11-20T16:14:47.353Z] Copying: 549/1024 [MB] (21 MBps) [2024-11-20T16:14:48.294Z] Copying: 573/1024 [MB] (24 MBps) [2024-11-20T16:14:49.232Z] Copying: 585/1024 [MB] (11 MBps) [2024-11-20T16:14:50.172Z] Copying: 603/1024 [MB] (18 MBps) [2024-11-20T16:14:51.111Z] Copying: 622/1024 [MB] (19 MBps) [2024-11-20T16:14:52.053Z] Copying: 642/1024 [MB] (19 MBps) [2024-11-20T16:14:53.438Z] Copying: 663/1024 [MB] (21 MBps) [2024-11-20T16:14:54.008Z] Copying: 678/1024 [MB] (15 MBps) [2024-11-20T16:14:55.389Z] Copying: 694/1024 [MB] (16 MBps) [2024-11-20T16:14:56.332Z] Copying: 723/1024 [MB] (28 MBps) [2024-11-20T16:14:57.274Z] Copying: 744/1024 [MB] (21 MBps) [2024-11-20T16:14:58.213Z] Copying: 765/1024 [MB] (21 MBps) [2024-11-20T16:14:59.147Z] Copying: 794/1024 [MB] (28 MBps) [2024-11-20T16:15:00.088Z] Copying: 825/1024 [MB] (31 MBps) [2024-11-20T16:15:01.020Z] Copying: 871/1024 [MB] (45 MBps) [2024-11-20T16:15:02.391Z] Copying: 916/1024 [MB] (45 MBps) [2024-11-20T16:15:03.355Z] Copying: 964/1024 [MB] (47 MBps) [2024-11-20T16:15:03.355Z] Copying: 1011/1024 [MB] (47 MBps) [2024-11-20T16:15:03.355Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-20 16:15:03.269437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.269488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:05.105 [2024-11-20 16:15:03.269502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:05.105 [2024-11-20 16:15:03.269510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.269530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:05.105 [2024-11-20 16:15:03.272139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.272170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:05.105 [2024-11-20 16:15:03.272185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:27:05.105 [2024-11-20 16:15:03.272194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.273659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.273691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:05.105 [2024-11-20 16:15:03.273701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:27:05.105 [2024-11-20 16:15:03.273708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.285913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.285948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:05.105 [2024-11-20 16:15:03.285958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.190 ms 00:27:05.105 [2024-11-20 16:15:03.285970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.292110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.292136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:05.105 [2024-11-20 16:15:03.292145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:27:05.105 
[2024-11-20 16:15:03.292152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.314848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.314880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:05.105 [2024-11-20 16:15:03.314890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:27:05.105 [2024-11-20 16:15:03.314898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.328192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.328222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:05.105 [2024-11-20 16:15:03.328233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.263 ms 00:27:05.105 [2024-11-20 16:15:03.328241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.328563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.328590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:05.105 [2024-11-20 16:15:03.328599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:05.105 [2024-11-20 16:15:03.328607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.105 [2024-11-20 16:15:03.350975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.105 [2024-11-20 16:15:03.351005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:05.105 [2024-11-20 16:15:03.351015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.354 ms 00:27:05.105 [2024-11-20 16:15:03.351022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.365 [2024-11-20 16:15:03.373391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.365 [2024-11-20 16:15:03.373443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:05.365 [2024-11-20 16:15:03.373453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.339 ms 00:27:05.365 [2024-11-20 16:15:03.373461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.365 [2024-11-20 16:15:03.395063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.365 [2024-11-20 16:15:03.395093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:05.366 [2024-11-20 16:15:03.395102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.573 ms 00:27:05.366 [2024-11-20 16:15:03.395110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.366 [2024-11-20 16:15:03.416962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.366 [2024-11-20 16:15:03.416992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:05.366 [2024-11-20 16:15:03.417002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.801 ms 00:27:05.366 [2024-11-20 16:15:03.417009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.366 [2024-11-20 16:15:03.417039] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:05.366 [2024-11-20 16:15:03.417052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 256 / 261120 wr_cnt: 1 state: open 00:27:05.366 
[2024-11-20 16:15:03.417067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:27:05.366 [2024-11-20 16:15:03.417249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:05.366 [2024-11-20 16:15:03.417609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:05.367 [2024-11-20 16:15:03.417805] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:27:05.367 [2024-11-20 16:15:03.417815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 794ad26d-746e-41d5-9c76-50f7c33cb882 00:27:05.367 [2024-11-20 16:15:03.417822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 256 00:27:05.367 [2024-11-20 16:15:03.417829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1216 00:27:05.367 [2024-11-20 16:15:03.417837] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 256 00:27:05.367 [2024-11-20 16:15:03.417845] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 4.7500 00:27:05.367 [2024-11-20 16:15:03.417852] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:05.367 [2024-11-20 16:15:03.417859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:05.367 [2024-11-20 16:15:03.417872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:05.367 [2024-11-20 16:15:03.417878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:05.367 [2024-11-20 16:15:03.417885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:05.367 [2024-11-20 16:15:03.417892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.367 [2024-11-20 16:15:03.417900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:05.367 [2024-11-20 16:15:03.417908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:27:05.367 [2024-11-20 16:15:03.417915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.429992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.367 [2024-11-20 16:15:03.430020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:05.367 [2024-11-20 16:15:03.430029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.059 ms 00:27:05.367 [2024-11-20 16:15:03.430037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.430359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.367 [2024-11-20 16:15:03.430377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:05.367 [2024-11-20 16:15:03.430385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:27:05.367 [2024-11-20 16:15:03.430392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.462691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.462741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:05.367 [2024-11-20 16:15:03.462751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.462759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.462813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.462825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:05.367 [2024-11-20 16:15:03.462832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.462839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.462904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:27:05.367 [2024-11-20 16:15:03.462914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:05.367 [2024-11-20 16:15:03.462921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.462928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.462942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.462950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:05.367 [2024-11-20 16:15:03.462957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.462967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.539699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.539763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:05.367 [2024-11-20 16:15:03.539775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.539783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.602305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:05.367 [2024-11-20 16:15:03.602367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.602422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.367 [2024-11-20 16:15:03.602439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.602496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.367 [2024-11-20 16:15:03.602513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.602700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.367 [2024-11-20 16:15:03.602750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.602787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:05.367 [2024-11-20 16:15:03.602803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 
16:15:03.602845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.367 [2024-11-20 16:15:03.602862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.602906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.367 [2024-11-20 16:15:03.602916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.367 [2024-11-20 16:15:03.602923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.367 [2024-11-20 16:15:03.602931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.367 [2024-11-20 16:15:03.603036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.575 ms, result 0 00:27:06.741 00:27:06.741 00:27:06.741 16:15:04 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:07.000 [2024-11-20 16:15:05.033363] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:27:07.000 [2024-11-20 16:15:05.033484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80037 ] 00:27:07.000 [2024-11-20 16:15:05.193650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.259 [2024-11-20 16:15:05.288716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.518 [2024-11-20 16:15:05.544104] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:07.518 [2024-11-20 16:15:05.544161] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:07.518 [2024-11-20 16:15:05.697325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 16:15:05.697373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:07.518 [2024-11-20 16:15:05.697389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:07.518 [2024-11-20 16:15:05.697396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.518 [2024-11-20 16:15:05.697440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 16:15:05.697450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.518 [2024-11-20 16:15:05.697460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:07.518 [2024-11-20 16:15:05.697467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.518 [2024-11-20 16:15:05.697485] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:07.518 [2024-11-20 16:15:05.698186] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:07.518 [2024-11-20 16:15:05.698208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 
16:15:05.698216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.518 [2024-11-20 16:15:05.698224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:27:07.518 [2024-11-20 16:15:05.698231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.518 [2024-11-20 16:15:05.699326] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:07.518 [2024-11-20 16:15:05.711464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 16:15:05.711503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:07.518 [2024-11-20 16:15:05.711516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.138 ms 00:27:07.518 [2024-11-20 16:15:05.711524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.518 [2024-11-20 16:15:05.711587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 16:15:05.711597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:07.518 [2024-11-20 16:15:05.711605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:07.518 [2024-11-20 16:15:05.711612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.518 [2024-11-20 16:15:05.716795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 16:15:05.716830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.518 [2024-11-20 16:15:05.716840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.119 ms 00:27:07.518 [2024-11-20 16:15:05.716852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.518 [2024-11-20 16:15:05.716928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.518 [2024-11-20 16:15:05.716943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.518 [2024-11-20 16:15:05.716952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:07.519 [2024-11-20 16:15:05.716959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.519 [2024-11-20 16:15:05.716999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.519 [2024-11-20 16:15:05.717017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:07.519 [2024-11-20 16:15:05.717025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:07.519 [2024-11-20 16:15:05.717035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.519 [2024-11-20 16:15:05.717065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:07.519 [2024-11-20 16:15:05.720296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.519 [2024-11-20 16:15:05.720325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.519 [2024-11-20 16:15:05.720334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:27:07.519 [2024-11-20 16:15:05.720344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.519 [2024-11-20 16:15:05.720373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.519 [2024-11-20 16:15:05.720381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:07.519 [2024-11-20 
16:15:05.720389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:07.519 [2024-11-20 16:15:05.720396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.519 [2024-11-20 16:15:05.720415] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:07.519 [2024-11-20 16:15:05.720432] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:07.519 [2024-11-20 16:15:05.720466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:07.519 [2024-11-20 16:15:05.720483] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:07.519 [2024-11-20 16:15:05.720584] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:07.519 [2024-11-20 16:15:05.720601] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:07.519 [2024-11-20 16:15:05.720611] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:07.519 [2024-11-20 16:15:05.720621] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:07.519 [2024-11-20 16:15:05.720629] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:07.519 [2024-11-20 16:15:05.720637] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:07.519 [2024-11-20 16:15:05.720644] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:07.519 [2024-11-20 16:15:05.720651] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:07.519 [2024-11-20 16:15:05.720661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:07.519 [2024-11-20 16:15:05.720668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.519 [2024-11-20 16:15:05.720675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:07.519 [2024-11-20 16:15:05.720682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:27:07.519 [2024-11-20 16:15:05.720689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.519 [2024-11-20 16:15:05.720781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.519 [2024-11-20 16:15:05.720793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:07.519 [2024-11-20 16:15:05.720801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:07.519 [2024-11-20 16:15:05.720808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.519 [2024-11-20 16:15:05.720911] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:07.519 [2024-11-20 16:15:05.720927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:07.519 [2024-11-20 16:15:05.720935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.519 [2024-11-20 16:15:05.720943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.720950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:07.519 [2024-11-20 16:15:05.720957] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.720964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:07.519 [2024-11-20 16:15:05.720970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:07.519 [2024-11-20 16:15:05.720977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:07.519 [2024-11-20 16:15:05.720984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.519 [2024-11-20 16:15:05.720991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:07.519 [2024-11-20 16:15:05.720997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:07.519 [2024-11-20 16:15:05.721003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.519 [2024-11-20 16:15:05.721010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:07.519 [2024-11-20 16:15:05.721017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:07.519 [2024-11-20 16:15:05.721030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:07.519 [2024-11-20 16:15:05.721043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:07.519 [2024-11-20 16:15:05.721062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:07.519 [2024-11-20 16:15:05.721081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:07.519 [2024-11-20 16:15:05.721099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:07.519 [2024-11-20 16:15:05.721118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:07.519 [2024-11-20 16:15:05.721137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.519 [2024-11-20 16:15:05.721149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:07.519 [2024-11-20 16:15:05.721155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:07.519 [2024-11-20 16:15:05.721161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.519 [2024-11-20 16:15:05.721167] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:07.519 [2024-11-20 16:15:05.721173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:07.519 [2024-11-20 16:15:05.721179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:07.519 [2024-11-20 16:15:05.721192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:07.519 [2024-11-20 16:15:05.721198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721204] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:07.519 [2024-11-20 16:15:05.721211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:07.519 [2024-11-20 16:15:05.721217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.519 [2024-11-20 16:15:05.721233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:07.519 [2024-11-20 16:15:05.721239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:07.519 [2024-11-20 16:15:05.721246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:07.519 [2024-11-20 16:15:05.721252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:07.519 [2024-11-20 16:15:05.721259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:07.519 [2024-11-20 16:15:05.721265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:07.519 [2024-11-20 16:15:05.721273] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:07.519 [2024-11-20 16:15:05.721282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.519 [2024-11-20 16:15:05.721291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:07.519 [2024-11-20 16:15:05.721298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:07.519 [2024-11-20 16:15:05.721305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:07.519 [2024-11-20 16:15:05.721311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:07.519 [2024-11-20 16:15:05.721318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:07.519 [2024-11-20 16:15:05.721325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:07.519 [2024-11-20 16:15:05.721332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:07.519 [2024-11-20 16:15:05.721339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:07.519 [2024-11-20 16:15:05.721345] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:07.519 [2024-11-20 16:15:05.721352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:07.520 [2024-11-20 16:15:05.721359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:07.520 [2024-11-20 16:15:05.721365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:07.520 [2024-11-20 16:15:05.721372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:07.520 [2024-11-20 16:15:05.721379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:07.520 [2024-11-20 16:15:05.721386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:07.520 [2024-11-20 16:15:05.721395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.520 [2024-11-20 16:15:05.721403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:07.520 [2024-11-20 16:15:05.721411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:07.520 [2024-11-20 16:15:05.721417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:07.520 [2024-11-20 16:15:05.721424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:07.520 [2024-11-20 16:15:05.721431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.520 [2024-11-20 16:15:05.721438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:07.520 [2024-11-20 16:15:05.721445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:27:07.520 [2024-11-20 16:15:05.721453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.520 [2024-11-20 16:15:05.747013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.520 [2024-11-20 16:15:05.747049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.520 [2024-11-20 16:15:05.747059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:27:07.520 [2024-11-20 16:15:05.747067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.520 [2024-11-20 16:15:05.747149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.520 [2024-11-20 16:15:05.747157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:07.520 [2024-11-20 16:15:05.747165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:07.520 [2024-11-20 16:15:05.747172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.778 [2024-11-20 16:15:05.790457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.778 [2024-11-20 16:15:05.790501] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.778 [2024-11-20 16:15:05.790513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.236 ms 00:27:07.778 [2024-11-20 16:15:05.790520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.778 [2024-11-20 16:15:05.790564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.790575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.779 [2024-11-20 16:15:05.790586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:07.779 [2024-11-20 16:15:05.790593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.790970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.790994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.779 [2024-11-20 16:15:05.791004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:27:07.779 [2024-11-20 16:15:05.791012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.791130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.791145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.779 [2024-11-20 16:15:05.791153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:07.779 [2024-11-20 16:15:05.791164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.804044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.804074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.779 [2024-11-20 16:15:05.804087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.859 ms 00:27:07.779 [2024-11-20 16:15:05.804094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.816496] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:27:07.779 [2024-11-20 16:15:05.816530] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:07.779 [2024-11-20 16:15:05.816542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.816550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:07.779 [2024-11-20 16:15:05.816558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.358 ms 00:27:07.779 [2024-11-20 16:15:05.816564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.841979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.842021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:07.779 [2024-11-20 16:15:05.842032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.377 ms 00:27:07.779 [2024-11-20 16:15:05.842039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.853926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.853965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 
00:27:07.779 [2024-11-20 16:15:05.853975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.844 ms 00:27:07.779 [2024-11-20 16:15:05.853982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.865231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.865270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:07.779 [2024-11-20 16:15:05.865280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.216 ms 00:27:07.779 [2024-11-20 16:15:05.865287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.865886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.865907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:07.779 [2024-11-20 16:15:05.865916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:27:07.779 [2024-11-20 16:15:05.865925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.919921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.919975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:07.779 [2024-11-20 16:15:05.919992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.979 ms 00:27:07.779 [2024-11-20 16:15:05.920000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.930268] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:07.779 [2024-11-20 16:15:05.932587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.932617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:07.779 [2024-11-20 16:15:05.932628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.539 ms 00:27:07.779 [2024-11-20 16:15:05.932636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.932738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.932750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:07.779 [2024-11-20 16:15:05.932758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:07.779 [2024-11-20 16:15:05.932768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.933295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.933327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:07.779 [2024-11-20 16:15:05.933336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:27:07.779 [2024-11-20 16:15:05.933343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.933365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.933374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:07.779 [2024-11-20 16:15:05.933382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:07.779 [2024-11-20 16:15:05.933389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 
16:15:05.933423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:07.779 [2024-11-20 16:15:05.933433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.933441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:07.779 [2024-11-20 16:15:05.933448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:07.779 [2024-11-20 16:15:05.933455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.955990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.956022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:07.779 [2024-11-20 16:15:05.956033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.519 ms 00:27:07.779 [2024-11-20 16:15:05.956045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.956116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.779 [2024-11-20 16:15:05.956125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:07.779 [2024-11-20 16:15:05.956133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:07.779 [2024-11-20 16:15:05.956139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.779 [2024-11-20 16:15:05.957184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.459 ms, result 0 00:27:09.154  [2024-11-20T16:15:08.342Z] Copying: 936/1048576 [kB] (936 kBps) [2024-11-20T16:15:09.279Z] Copying: 35/1024 [MB] (34 MBps) [2024-11-20T16:15:10.210Z] Copying: 61/1024 [MB] (25 MBps) [2024-11-20T16:15:11.143Z] Copying: 111/1024 [MB] (50 MBps) [2024-11-20T16:15:12.514Z] Copying: 157/1024 [MB] (46 MBps) [2024-11-20T16:15:13.445Z] Copying: 204/1024 [MB] (47 MBps) [2024-11-20T16:15:14.377Z] Copying: 253/1024 [MB] (48 MBps) [2024-11-20T16:15:15.311Z] Copying: 299/1024 [MB] (46 MBps) [2024-11-20T16:15:16.243Z] Copying: 347/1024 [MB] (48 MBps) [2024-11-20T16:15:17.177Z] Copying: 394/1024 [MB] (46 MBps) [2024-11-20T16:15:18.552Z] Copying: 443/1024 [MB] (49 MBps) [2024-11-20T16:15:19.486Z] Copying: 493/1024 [MB] (50 MBps) [2024-11-20T16:15:20.474Z] Copying: 539/1024 [MB] (45 MBps) [2024-11-20T16:15:21.414Z] Copying: 578/1024 [MB] (39 MBps) [2024-11-20T16:15:22.359Z] Copying: 611/1024 [MB] (32 MBps) [2024-11-20T16:15:23.296Z] Copying: 635/1024 [MB] (24 MBps) [2024-11-20T16:15:24.230Z] Copying: 668/1024 [MB] (32 MBps) [2024-11-20T16:15:25.161Z] Copying: 710/1024 [MB] (42 MBps) [2024-11-20T16:15:26.531Z] Copying: 759/1024 [MB] (48 MBps) [2024-11-20T16:15:27.461Z] Copying: 805/1024 [MB] (46 MBps) [2024-11-20T16:15:28.394Z] Copying: 848/1024 [MB] (43 MBps) [2024-11-20T16:15:29.329Z] Copying: 893/1024 [MB] (44 MBps) [2024-11-20T16:15:30.263Z] Copying: 939/1024 [MB] (46 MBps) [2024-11-20T16:15:31.198Z] Copying: 985/1024 [MB] (45 MBps) [2024-11-20T16:15:32.132Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-11-20 16:15:31.951785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:31.951844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.882 [2024-11-20 16:15:31.951857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.882 [2024-11-20 16:15:31.951866] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:31.951891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.882 [2024-11-20 16:15:31.954571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:31.954605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.882 [2024-11-20 16:15:31.954615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:27:33.882 [2024-11-20 16:15:31.954623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:31.954849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:31.954865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.882 [2024-11-20 16:15:31.954874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:27:33.882 [2024-11-20 16:15:31.954881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:31.965595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:31.965632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.882 [2024-11-20 16:15:31.965644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.696 ms 00:27:33.882 [2024-11-20 16:15:31.965652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:31.971841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:31.971883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.882 [2024-11-20 16:15:31.971893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:27:33.882 [2024-11-20 16:15:31.971900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:31.997839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:31.997875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.882 [2024-11-20 16:15:31.997886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.898 ms 00:27:33.882 [2024-11-20 16:15:31.997895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:32.013011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:32.013045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.882 [2024-11-20 16:15:32.013062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.084 ms 00:27:33.882 [2024-11-20 16:15:32.013071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:32.077503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:32.077569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.882 [2024-11-20 16:15:32.077583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.392 ms 00:27:33.882 [2024-11-20 16:15:32.077591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:32.101321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:32.101358] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.882 [2024-11-20 16:15:32.101369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.715 ms 00:27:33.882 [2024-11-20 16:15:32.101377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.882 [2024-11-20 16:15:32.124719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.882 [2024-11-20 16:15:32.124758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.882 [2024-11-20 16:15:32.124777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.309 ms 00:27:33.882 [2024-11-20 16:15:32.124785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.142 [2024-11-20 16:15:32.147283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.142 [2024-11-20 16:15:32.147314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.142 [2024-11-20 16:15:32.147326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.467 ms 00:27:34.142 [2024-11-20 16:15:32.147334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.142 [2024-11-20 16:15:32.170048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.142 [2024-11-20 16:15:32.170082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.142 [2024-11-20 16:15:32.170092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.661 ms 00:27:34.142 [2024-11-20 16:15:32.170100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.142 [2024-11-20 16:15:32.170131] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.142 [2024-11-20 16:15:32.170145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:27:34.142 [2024-11-20 16:15:32.170155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.142 [2024-11-20 16:15:32.170163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.142 [2024-11-20 16:15:32.170170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170419] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 
16:15:32.170602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:34.143 [2024-11-20 16:15:32.170676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:27:34.144 [2024-11-20 16:15:32.170795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:34.144 [2024-11-20 16:15:32.170902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:34.144 [2024-11-20 16:15:32.170909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 794ad26d-746e-41d5-9c76-50f7c33cb882 00:27:34.144 [2024-11-20 16:15:32.170917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 00:27:34.144 [2024-11-20 16:15:32.170924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:27:34.144 [2024-11-20 16:15:32.170931] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:27:34.144 [2024-11-20 16:15:32.170939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:27:34.144 [2024-11-20 16:15:32.170946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.144 [2024-11-20 16:15:32.170953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:34.144 [2024-11-20 16:15:32.170964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.144 [2024-11-20 16:15:32.170976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.144 [2024-11-20 16:15:32.170983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.144 [2024-11-20 16:15:32.170990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.144 [2024-11-20 16:15:32.170997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:34.144 [2024-11-20 16:15:32.171006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:27:34.144 [2024-11-20 16:15:32.171013] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.183241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.144 [2024-11-20 16:15:32.183271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:34.144 [2024-11-20 16:15:32.183280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms 00:27:34.144 [2024-11-20 16:15:32.183289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.183644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.144 [2024-11-20 16:15:32.183659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:34.144 [2024-11-20 16:15:32.183667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:27:34.144 [2024-11-20 16:15:32.183675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.216240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.216279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.144 [2024-11-20 16:15:32.216294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.144 [2024-11-20 16:15:32.216302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.216363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.216371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.144 [2024-11-20 16:15:32.216378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.144 [2024-11-20 16:15:32.216386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.216441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.216450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.144 [2024-11-20 16:15:32.216457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.144 [2024-11-20 16:15:32.216468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.216482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.216490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.144 [2024-11-20 16:15:32.216497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.144 [2024-11-20 16:15:32.216504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.294088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.294128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.144 [2024-11-20 16:15:32.294140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.144 [2024-11-20 16:15:32.294152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.357622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.357669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.144 [2024-11-20 16:15:32.357680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:34.144 [2024-11-20 16:15:32.357688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.144 [2024-11-20 16:15:32.357773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.144 [2024-11-20 16:15:32.357783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.144 [2024-11-20 16:15:32.357791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.144 [2024-11-20 16:15:32.357798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.145 [2024-11-20 16:15:32.357848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.145 [2024-11-20 16:15:32.357857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.145 [2024-11-20 16:15:32.357865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.145 [2024-11-20 16:15:32.357872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.145 [2024-11-20 16:15:32.357956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.145 [2024-11-20 16:15:32.357966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.145 [2024-11-20 16:15:32.357974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.145 [2024-11-20 16:15:32.357981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.145 [2024-11-20 16:15:32.358007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.145 [2024-11-20 16:15:32.358018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.145 [2024-11-20 16:15:32.358027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.145 [2024-11-20 16:15:32.358034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.145 [2024-11-20 16:15:32.358066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.145 [2024-11-20 16:15:32.358074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.145 [2024-11-20 16:15:32.358082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.145 [2024-11-20 16:15:32.358089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.145 [2024-11-20 16:15:32.358127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.145 [2024-11-20 16:15:32.358138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.145 [2024-11-20 16:15:32.358145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.145 [2024-11-20 16:15:32.358152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.145 [2024-11-20 16:15:32.358257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.448 ms, result 0 00:27:35.618 00:27:35.618 00:27:35.618 16:15:33 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:37.518 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:37.518 16:15:35 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:37.518 16:15:35 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:37.518 16:15:35 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:37.518 16:15:35 ftl.ftl_restore -- 
ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77104 00:27:37.519 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77104 ']' 00:27:37.519 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77104 00:27:37.519 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77104) - No such process 00:27:37.519 Process with pid 77104 is not found 00:27:37.519 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77104 is not found' 00:27:37.519 Remove shared memory files 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:37.519 16:15:35 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:37.519 ************************************ 00:27:37.519 END TEST ftl_restore 00:27:37.519 ************************************ 00:27:37.519 00:27:37.519 real 5m17.537s 00:27:37.519 user 5m4.936s 00:27:37.519 sys 0m12.056s 00:27:37.519 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.519 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:37.519 16:15:35 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:37.519 16:15:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:37.519 16:15:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.519 16:15:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:37.519 ************************************ 00:27:37.519 START TEST ftl_dirty_shutdown 00:27:37.519 ************************************ 00:27:37.519 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:37.778 * Looking for test storage... 
00:27:37.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:37.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.778 --rc genhtml_branch_coverage=1 00:27:37.778 --rc genhtml_function_coverage=1 00:27:37.778 --rc genhtml_legend=1 00:27:37.778 --rc geninfo_all_blocks=1 00:27:37.778 --rc geninfo_unexecuted_blocks=1 00:27:37.778 00:27:37.778 ' 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:37.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.778 --rc genhtml_branch_coverage=1 00:27:37.778 --rc genhtml_function_coverage=1 00:27:37.778 --rc genhtml_legend=1 00:27:37.778 --rc geninfo_all_blocks=1 00:27:37.778 --rc geninfo_unexecuted_blocks=1 00:27:37.778 00:27:37.778 ' 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:37.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.778 --rc genhtml_branch_coverage=1 00:27:37.778 --rc genhtml_function_coverage=1 00:27:37.778 --rc genhtml_legend=1 00:27:37.778 --rc geninfo_all_blocks=1 00:27:37.778 --rc geninfo_unexecuted_blocks=1 00:27:37.778 00:27:37.778 ' 00:27:37.778 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:37.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.778 --rc genhtml_branch_coverage=1 00:27:37.778 --rc genhtml_function_coverage=1 00:27:37.778 --rc genhtml_legend=1 00:27:37.778 --rc geninfo_all_blocks=1 00:27:37.778 --rc geninfo_unexecuted_blocks=1 00:27:37.778 00:27:37.778 ' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:37.779 16:15:35 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80416 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80416 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80416 ']' 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.779 16:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.779 [2024-11-20 16:15:35.952178] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
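The lt 1.15 2 / cmp_versions chain traced a little earlier is scripts/common.sh comparing the installed lcov version component-wise: both version strings are split on '.', '-' and ':' into arrays, then compared position by position up to the longer length. A condensed, self-contained sketch of the same idiom (not the verbatim scripts/common.sh source):

version_lt() {
    local IFS=.-:                                # same separators as the IFS=.-: in the trace
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}          # missing components compare as 0;
                                                 # non-numeric parts would need the
                                                 # 'decimal' guard seen in the trace
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                                     # equal versions are not "less than"
}
# version_lt 1.15 2 && echo "lcov older than 2.x"   -> prints, matching the trace's result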
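dirty_shutdown.sh's argument handling, visible in the getopts :u:c: trace above, follows the standard bash pattern: option flags first (-c selects the NV-cache PCI address), then a shift, then the remaining positional device addresses. A minimal sketch of the same shape; the meaning of -u is an inference here (an FTL uuid for restart scenarios), not confirmed by this log:

nv_cache="" uuid=""
while getopts ":u:c:" opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;                   # e.g. 0000:00:10.0, as in this run
        u) uuid=$OPTARG ;;                       # assumption: uuid for a restarted FTL instance
        *) echo "usage: $0 [-c cache_bdf] [-u uuid] device_bdf" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))                            # idiomatic form of the trace's literal 'shift 2'
device=$1                                        # e.g. 0000:00:11.0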
00:27:37.779 [2024-11-20 16:15:35.952895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80416 ] 00:27:38.037 [2024-11-20 16:15:36.117107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.037 [2024-11-20 16:15:36.217242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:38.603 16:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:38.861 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:39.119 { 00:27:39.119 "name": "nvme0n1", 00:27:39.119 "aliases": [ 00:27:39.119 "a39aab91-9a8d-43a6-b9eb-27a29dc9ab48" 00:27:39.119 ], 00:27:39.119 "product_name": "NVMe disk", 00:27:39.119 "block_size": 4096, 00:27:39.119 "num_blocks": 1310720, 00:27:39.119 "uuid": "a39aab91-9a8d-43a6-b9eb-27a29dc9ab48", 00:27:39.119 "numa_id": -1, 00:27:39.119 "assigned_rate_limits": { 00:27:39.119 "rw_ios_per_sec": 0, 00:27:39.119 "rw_mbytes_per_sec": 0, 00:27:39.119 "r_mbytes_per_sec": 0, 00:27:39.119 "w_mbytes_per_sec": 0 00:27:39.119 }, 00:27:39.119 "claimed": true, 00:27:39.119 "claim_type": "read_many_write_one", 00:27:39.119 "zoned": false, 00:27:39.119 "supported_io_types": { 00:27:39.119 "read": true, 00:27:39.119 "write": true, 00:27:39.119 "unmap": true, 00:27:39.119 "flush": true, 00:27:39.119 "reset": true, 00:27:39.119 "nvme_admin": true, 00:27:39.119 "nvme_io": true, 00:27:39.119 "nvme_io_md": false, 00:27:39.119 "write_zeroes": true, 00:27:39.119 "zcopy": false, 00:27:39.119 "get_zone_info": false, 00:27:39.119 "zone_management": false, 00:27:39.119 "zone_append": false, 00:27:39.119 "compare": true, 00:27:39.119 "compare_and_write": false, 00:27:39.119 "abort": true, 00:27:39.119 "seek_hole": false, 00:27:39.119 "seek_data": false, 00:27:39.119 
"copy": true, 00:27:39.119 "nvme_iov_md": false 00:27:39.119 }, 00:27:39.119 "driver_specific": { 00:27:39.119 "nvme": [ 00:27:39.119 { 00:27:39.119 "pci_address": "0000:00:11.0", 00:27:39.119 "trid": { 00:27:39.119 "trtype": "PCIe", 00:27:39.119 "traddr": "0000:00:11.0" 00:27:39.119 }, 00:27:39.119 "ctrlr_data": { 00:27:39.119 "cntlid": 0, 00:27:39.119 "vendor_id": "0x1b36", 00:27:39.119 "model_number": "QEMU NVMe Ctrl", 00:27:39.119 "serial_number": "12341", 00:27:39.119 "firmware_revision": "8.0.0", 00:27:39.119 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:39.119 "oacs": { 00:27:39.119 "security": 0, 00:27:39.119 "format": 1, 00:27:39.119 "firmware": 0, 00:27:39.119 "ns_manage": 1 00:27:39.119 }, 00:27:39.119 "multi_ctrlr": false, 00:27:39.119 "ana_reporting": false 00:27:39.119 }, 00:27:39.119 "vs": { 00:27:39.119 "nvme_version": "1.4" 00:27:39.119 }, 00:27:39.119 "ns_data": { 00:27:39.119 "id": 1, 00:27:39.119 "can_share": false 00:27:39.119 } 00:27:39.119 } 00:27:39.119 ], 00:27:39.119 "mp_policy": "active_passive" 00:27:39.119 } 00:27:39.119 } 00:27:39.119 ]' 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:39.119 16:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:39.120 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:39.120 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:39.120 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:39.378 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:39.378 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:39.378 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=e75ea8e7-d933-41ce-9639-1c7f618a92a0 00:27:39.378 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:39.378 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e75ea8e7-d933-41ce-9639-1c7f618a92a0 00:27:39.636 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:39.894 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=c86ebf52-bffa-4ddf-b28f-4dc5addb70cd 00:27:39.894 16:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c86ebf52-bffa-4ddf-b28f-4dc5addb70cd 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:40.153 { 00:27:40.153 "name": "b79f5b6d-2eea-4b91-bf8c-1fab705ce38d", 00:27:40.153 "aliases": [ 00:27:40.153 "lvs/nvme0n1p0" 00:27:40.153 ], 00:27:40.153 "product_name": "Logical Volume", 00:27:40.153 "block_size": 4096, 00:27:40.153 "num_blocks": 26476544, 00:27:40.153 "uuid": "b79f5b6d-2eea-4b91-bf8c-1fab705ce38d", 00:27:40.153 "assigned_rate_limits": { 00:27:40.153 "rw_ios_per_sec": 0, 00:27:40.153 "rw_mbytes_per_sec": 0, 00:27:40.153 "r_mbytes_per_sec": 0, 00:27:40.153 "w_mbytes_per_sec": 0 00:27:40.153 }, 00:27:40.153 "claimed": false, 00:27:40.153 "zoned": false, 00:27:40.153 "supported_io_types": { 00:27:40.153 "read": true, 00:27:40.153 "write": true, 00:27:40.153 "unmap": true, 00:27:40.153 "flush": false, 00:27:40.153 "reset": true, 00:27:40.153 "nvme_admin": false, 00:27:40.153 "nvme_io": false, 00:27:40.153 "nvme_io_md": false, 00:27:40.153 "write_zeroes": true, 00:27:40.153 "zcopy": false, 00:27:40.153 "get_zone_info": false, 00:27:40.153 "zone_management": false, 00:27:40.153 "zone_append": false, 00:27:40.153 "compare": false, 00:27:40.153 "compare_and_write": false, 00:27:40.153 "abort": false, 00:27:40.153 "seek_hole": true, 00:27:40.153 "seek_data": true, 00:27:40.153 "copy": false, 00:27:40.153 "nvme_iov_md": false 00:27:40.153 }, 00:27:40.153 "driver_specific": { 00:27:40.153 "lvol": { 00:27:40.153 "lvol_store_uuid": "c86ebf52-bffa-4ddf-b28f-4dc5addb70cd", 00:27:40.153 "base_bdev": "nvme0n1", 00:27:40.153 "thin_provision": true, 00:27:40.153 "num_allocated_clusters": 0, 00:27:40.153 "snapshot": false, 00:27:40.153 "clone": false, 00:27:40.153 "esnap_clone": false 00:27:40.153 } 00:27:40.153 } 00:27:40.153 } 00:27:40.153 ]' 00:27:40.153 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:40.411 16:15:38 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:40.670 { 00:27:40.670 "name": "b79f5b6d-2eea-4b91-bf8c-1fab705ce38d", 00:27:40.670 "aliases": [ 00:27:40.670 "lvs/nvme0n1p0" 00:27:40.670 ], 00:27:40.670 "product_name": "Logical Volume", 00:27:40.670 "block_size": 4096, 00:27:40.670 "num_blocks": 26476544, 00:27:40.670 "uuid": "b79f5b6d-2eea-4b91-bf8c-1fab705ce38d", 00:27:40.670 "assigned_rate_limits": { 00:27:40.670 "rw_ios_per_sec": 0, 00:27:40.670 "rw_mbytes_per_sec": 0, 00:27:40.670 "r_mbytes_per_sec": 0, 00:27:40.670 "w_mbytes_per_sec": 0 00:27:40.670 }, 00:27:40.670 "claimed": false, 00:27:40.670 "zoned": false, 00:27:40.670 "supported_io_types": { 00:27:40.670 "read": true, 00:27:40.670 "write": true, 00:27:40.670 "unmap": true, 00:27:40.670 "flush": false, 00:27:40.670 "reset": true, 00:27:40.670 "nvme_admin": false, 00:27:40.670 "nvme_io": false, 00:27:40.670 "nvme_io_md": false, 00:27:40.670 "write_zeroes": true, 00:27:40.670 "zcopy": false, 00:27:40.670 "get_zone_info": false, 00:27:40.670 "zone_management": false, 00:27:40.670 "zone_append": false, 00:27:40.670 "compare": false, 00:27:40.670 "compare_and_write": false, 00:27:40.670 "abort": false, 00:27:40.670 "seek_hole": true, 00:27:40.670 "seek_data": true, 00:27:40.670 "copy": false, 00:27:40.670 "nvme_iov_md": false 00:27:40.670 }, 00:27:40.670 "driver_specific": { 00:27:40.670 "lvol": { 00:27:40.670 "lvol_store_uuid": "c86ebf52-bffa-4ddf-b28f-4dc5addb70cd", 00:27:40.670 "base_bdev": "nvme0n1", 00:27:40.670 "thin_provision": true, 00:27:40.670 "num_allocated_clusters": 0, 00:27:40.670 "snapshot": false, 00:27:40.670 "clone": false, 00:27:40.670 "esnap_clone": false 00:27:40.670 } 00:27:40.670 } 00:27:40.670 } 00:27:40.670 ]' 00:27:40.670 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:40.928 16:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:40.928 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:41.186 { 00:27:41.186 "name": "b79f5b6d-2eea-4b91-bf8c-1fab705ce38d", 00:27:41.186 "aliases": [ 00:27:41.186 "lvs/nvme0n1p0" 00:27:41.186 ], 00:27:41.186 "product_name": "Logical Volume", 00:27:41.186 "block_size": 4096, 00:27:41.186 "num_blocks": 26476544, 00:27:41.186 "uuid": "b79f5b6d-2eea-4b91-bf8c-1fab705ce38d", 00:27:41.186 "assigned_rate_limits": { 00:27:41.186 "rw_ios_per_sec": 0, 00:27:41.186 "rw_mbytes_per_sec": 0, 00:27:41.186 "r_mbytes_per_sec": 0, 00:27:41.186 "w_mbytes_per_sec": 0 00:27:41.186 }, 00:27:41.186 "claimed": false, 00:27:41.186 "zoned": false, 00:27:41.186 "supported_io_types": { 00:27:41.186 "read": true, 00:27:41.186 "write": true, 00:27:41.186 "unmap": true, 00:27:41.186 "flush": false, 00:27:41.186 "reset": true, 00:27:41.186 "nvme_admin": false, 00:27:41.186 "nvme_io": false, 00:27:41.186 "nvme_io_md": false, 00:27:41.186 "write_zeroes": true, 00:27:41.186 "zcopy": false, 00:27:41.186 "get_zone_info": false, 00:27:41.186 "zone_management": false, 00:27:41.186 "zone_append": false, 00:27:41.186 "compare": false, 00:27:41.186 "compare_and_write": false, 00:27:41.186 "abort": false, 00:27:41.186 "seek_hole": true, 00:27:41.186 "seek_data": true, 00:27:41.186 "copy": false, 00:27:41.186 "nvme_iov_md": false 00:27:41.186 }, 00:27:41.186 "driver_specific": { 00:27:41.186 "lvol": { 00:27:41.186 "lvol_store_uuid": "c86ebf52-bffa-4ddf-b28f-4dc5addb70cd", 00:27:41.186 "base_bdev": "nvme0n1", 00:27:41.186 "thin_provision": true, 00:27:41.186 "num_allocated_clusters": 0, 00:27:41.186 "snapshot": false, 00:27:41.186 "clone": false, 00:27:41.186 "esnap_clone": false 00:27:41.186 } 00:27:41.186 } 00:27:41.186 } 00:27:41.186 ]' 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:41.186 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b79f5b6d-2eea-4b91-bf8c-1fab705ce38d 
--l2p_dram_limit 10' 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:41.445 16:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b79f5b6d-2eea-4b91-bf8c-1fab705ce38d --l2p_dram_limit 10 -c nvc0n1p0 00:27:41.445 [2024-11-20 16:15:39.626577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.626631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:41.445 [2024-11-20 16:15:39.626645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:41.445 [2024-11-20 16:15:39.626653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.626701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.626709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:41.445 [2024-11-20 16:15:39.626716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:41.445 [2024-11-20 16:15:39.626737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.626755] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:41.445 [2024-11-20 16:15:39.627353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:41.445 [2024-11-20 16:15:39.627375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.627381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:41.445 [2024-11-20 16:15:39.627389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:27:41.445 [2024-11-20 16:15:39.627395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.627422] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6062d0f1-1317-4c50-93e4-ed9106daa74a 00:27:41.445 [2024-11-20 16:15:39.628437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.628455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:41.445 [2024-11-20 16:15:39.628462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:41.445 [2024-11-20 16:15:39.628470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.633656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.633789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:41.445 [2024-11-20 16:15:39.633859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:27:41.445 [2024-11-20 16:15:39.633881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.634010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.634035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:41.445 [2024-11-20 16:15:39.634141] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:41.445 [2024-11-20 16:15:39.634164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.634217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.634238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:41.445 [2024-11-20 16:15:39.634254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:41.445 [2024-11-20 16:15:39.634338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.634369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:41.445 [2024-11-20 16:15:39.637336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.637422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:41.445 [2024-11-20 16:15:39.637467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:27:41.445 [2024-11-20 16:15:39.637484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.637521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.637592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:41.445 [2024-11-20 16:15:39.637613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:41.445 [2024-11-20 16:15:39.637628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.637677] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:41.445 [2024-11-20 16:15:39.637809] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:41.445 [2024-11-20 16:15:39.637886] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:41.445 [2024-11-20 16:15:39.637948] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:41.445 [2024-11-20 16:15:39.637977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:41.445 [2024-11-20 16:15:39.638001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:41.445 [2024-11-20 16:15:39.638047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:41.445 [2024-11-20 16:15:39.638063] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:41.445 [2024-11-20 16:15:39.638081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:41.445 [2024-11-20 16:15:39.638096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:41.445 [2024-11-20 16:15:39.638129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.638146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:41.445 [2024-11-20 16:15:39.638211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:27:41.445 [2024-11-20 16:15:39.638236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.638315] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.445 [2024-11-20 16:15:39.638358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:41.445 [2024-11-20 16:15:39.638378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:41.445 [2024-11-20 16:15:39.638394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.445 [2024-11-20 16:15:39.638517] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:41.446 [2024-11-20 16:15:39.638571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:41.446 [2024-11-20 16:15:39.638592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:41.446 [2024-11-20 16:15:39.638624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:41.446 [2024-11-20 16:15:39.638659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:41.446 [2024-11-20 16:15:39.638689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:41.446 [2024-11-20 16:15:39.638705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:41.446 [2024-11-20 16:15:39.638749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:41.446 [2024-11-20 16:15:39.638764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:41.446 [2024-11-20 16:15:39.638860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:41.446 [2024-11-20 16:15:39.638878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:41.446 [2024-11-20 16:15:39.638895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:41.446 [2024-11-20 16:15:39.638909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:41.446 [2024-11-20 16:15:39.638943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:41.446 [2024-11-20 16:15:39.638951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:41.446 [2024-11-20 16:15:39.638963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:41.446 [2024-11-20 16:15:39.638975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:41.446 [2024-11-20 16:15:39.638980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:41.446 [2024-11-20 16:15:39.638987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:41.446 [2024-11-20 16:15:39.638992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:41.446 [2024-11-20 16:15:39.638999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:41.446 [2024-11-20 16:15:39.639004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:41.446 [2024-11-20 16:15:39.639010] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:41.446 [2024-11-20 16:15:39.639015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:41.446 [2024-11-20 16:15:39.639022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:41.446 [2024-11-20 16:15:39.639029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:41.446 [2024-11-20 16:15:39.639037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:41.446 [2024-11-20 16:15:39.639042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:41.446 [2024-11-20 16:15:39.639049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:41.446 [2024-11-20 16:15:39.639054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:41.446 [2024-11-20 16:15:39.639060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:41.446 [2024-11-20 16:15:39.639066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:41.446 [2024-11-20 16:15:39.639072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:41.446 [2024-11-20 16:15:39.639077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.639084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:41.446 [2024-11-20 16:15:39.639089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:41.446 [2024-11-20 16:15:39.639095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.639100] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:41.446 [2024-11-20 16:15:39.639108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:41.446 [2024-11-20 16:15:39.639114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:41.446 [2024-11-20 16:15:39.639121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:41.446 [2024-11-20 16:15:39.639127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:41.446 [2024-11-20 16:15:39.639135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:41.446 [2024-11-20 16:15:39.639140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:41.446 [2024-11-20 16:15:39.639147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:41.446 [2024-11-20 16:15:39.639152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:41.446 [2024-11-20 16:15:39.639158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:41.446 [2024-11-20 16:15:39.639167] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:41.446 [2024-11-20 16:15:39.639176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:41.446 [2024-11-20 16:15:39.639191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:41.446 [2024-11-20 16:15:39.639197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:41.446 [2024-11-20 16:15:39.639204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:41.446 [2024-11-20 16:15:39.639209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:41.446 [2024-11-20 16:15:39.639216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:41.446 [2024-11-20 16:15:39.639222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:41.446 [2024-11-20 16:15:39.639229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:41.446 [2024-11-20 16:15:39.639235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:41.446 [2024-11-20 16:15:39.639244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:41.446 [2024-11-20 16:15:39.639276] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:41.446 [2024-11-20 16:15:39.639284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:41.446 [2024-11-20 16:15:39.639297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:41.446 [2024-11-20 16:15:39.639303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:41.446 [2024-11-20 16:15:39.639310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:41.446 [2024-11-20 16:15:39.639316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.446 [2024-11-20 16:15:39.639323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:41.446 [2024-11-20 16:15:39.639329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:27:41.446 [2024-11-20 16:15:39.639336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.446 [2024-11-20 16:15:39.639368] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:41.446 [2024-11-20 16:15:39.639379] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:43.973 [2024-11-20 16:15:42.220480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.973 [2024-11-20 16:15:42.220638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:43.973 [2024-11-20 16:15:42.220697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2581.101 ms 00:27:43.973 [2024-11-20 16:15:42.220719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.231 [2024-11-20 16:15:42.242224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.231 [2024-11-20 16:15:42.242359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:44.231 [2024-11-20 16:15:42.242408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.262 ms 00:27:44.231 [2024-11-20 16:15:42.242429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.231 [2024-11-20 16:15:42.242547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.231 [2024-11-20 16:15:42.242569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:44.231 [2024-11-20 16:15:42.242586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:44.231 [2024-11-20 16:15:42.242606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.231 [2024-11-20 16:15:42.267070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.231 [2024-11-20 16:15:42.267182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:44.231 [2024-11-20 16:15:42.267227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.425 ms 00:27:44.232 [2024-11-20 16:15:42.267248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.267284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.267304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:44.232 [2024-11-20 16:15:42.267320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:44.232 [2024-11-20 16:15:42.267336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.267683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.267784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:44.232 [2024-11-20 16:15:42.267838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:27:44.232 [2024-11-20 16:15:42.267862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.267981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.268003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:44.232 [2024-11-20 16:15:42.268021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:44.232 [2024-11-20 16:15:42.268039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.279686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.279794] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:44.232 [2024-11-20 16:15:42.279840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.624 ms 00:27:44.232 [2024-11-20 16:15:42.279859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.302771] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:44.232 [2024-11-20 16:15:42.306188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.306276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:44.232 [2024-11-20 16:15:42.306325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.259 ms 00:27:44.232 [2024-11-20 16:15:42.306343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.365780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.365934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:44.232 [2024-11-20 16:15:42.366000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.394 ms 00:27:44.232 [2024-11-20 16:15:42.366020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.366175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.366201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:44.232 [2024-11-20 16:15:42.366266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:44.232 [2024-11-20 16:15:42.366285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.383854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.383954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:44.232 [2024-11-20 16:15:42.383996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.532 ms 00:27:44.232 [2024-11-20 16:15:42.384014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.401422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.401509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:44.232 [2024-11-20 16:15:42.401581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.369 ms 00:27:44.232 [2024-11-20 16:15:42.401597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.402068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.402135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:44.232 [2024-11-20 16:15:42.402175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:27:44.232 [2024-11-20 16:15:42.402193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.461661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.232 [2024-11-20 16:15:42.461775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:44.232 [2024-11-20 16:15:42.461794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.429 ms 00:27:44.232 [2024-11-20 16:15:42.461801] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.232 [2024-11-20 16:15:42.480160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.506 [2024-11-20 16:15:42.480254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:44.506 [2024-11-20 16:15:42.480270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.306 ms 00:27:44.506 [2024-11-20 16:15:42.480276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.506 [2024-11-20 16:15:42.498008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.506 [2024-11-20 16:15:42.498034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:44.506 [2024-11-20 16:15:42.498043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms 00:27:44.506 [2024-11-20 16:15:42.498049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.506 [2024-11-20 16:15:42.515917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.506 [2024-11-20 16:15:42.515942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:44.506 [2024-11-20 16:15:42.515952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.839 ms 00:27:44.506 [2024-11-20 16:15:42.515959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.506 [2024-11-20 16:15:42.515989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.506 [2024-11-20 16:15:42.515997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:44.506 [2024-11-20 16:15:42.516006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:44.506 [2024-11-20 16:15:42.516012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.506 [2024-11-20 16:15:42.516073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.506 [2024-11-20 16:15:42.516080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:44.507 [2024-11-20 16:15:42.516090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:44.507 [2024-11-20 16:15:42.516096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.507 [2024-11-20 16:15:42.516809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2889.885 ms, result 0 00:27:44.507 { 00:27:44.507 "name": "ftl0", 00:27:44.507 "uuid": "6062d0f1-1317-4c50-93e4-ed9106daa74a" 00:27:44.507 } 00:27:44.507 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:44.507 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:44.507 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:44.794 /dev/nbd0 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:44.794 1+0 records in 00:27:44.794 1+0 records out 00:27:44.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283722 s, 14.4 MB/s 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:44.794 16:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:44.794 [2024-11-20 16:15:43.041456] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:27:44.794 [2024-11-20 16:15:43.042107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80547 ] 00:27:45.052 [2024-11-20 16:15:43.205211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.311 [2024-11-20 16:15:43.317269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.687  [2024-11-20T16:15:45.871Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-20T16:15:46.805Z] Copying: 390/1024 [MB] (195 MBps) [2024-11-20T16:15:47.740Z] Copying: 645/1024 [MB] (254 MBps) [2024-11-20T16:15:48.306Z] Copying: 891/1024 [MB] (246 MBps) [2024-11-20T16:15:48.872Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:27:50.622 00:27:50.622 16:15:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:52.523 16:15:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:52.782 [2024-11-20 16:15:50.793269] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
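The waitfornbd probe traced just above (a grep of /proc/partitions followed by a 1-block O_DIRECT read) is how the harness confirms /dev/nbd0 is actually serving I/O before spdk_dd writes through it. A reconstruction from the visible xtrace, with an assumed poll interval and a placeholder scratch path:

waitfornbd() {
    local nbd_name=$1 i size
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device node registered?
        sleep 0.1                                          # assumption: poll interval
    done
    for (( i = 1; i <= 20; i++ )); do
        # one 4 KiB O_DIRECT read proves the block device answers real I/O
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0                       # 4096 bytes read -> device ready
    done
    return 1
}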
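Further back, the repeated get_bdev_size calls (bs=4096, nb=1310720 -> 5120 MiB for nvme0n1; nb=26476544 -> 103424 MiB for the thin lvol) reduce to one JSON-RPC query plus two jq extractions over the returned bdev descriptor. A sketch of that computation, reconstructed from the trace rather than copied from autotest_common.sh:

get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    # query the running spdk_tgt over JSON-RPC; rpc.py path matches this environment
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")            # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")            # 1310720 for nvme0n1
    echo $(( bs * nb / 1024 / 1024 ))                      # size in MiB: 4096*1310720 -> 5120
}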
00:27:52.782 [2024-11-20 16:15:50.793366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80635 ]
00:27:52.782 [2024-11-20 16:15:50.946606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:53.040 [2024-11-20 16:15:51.059847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:54.451 [2024-11-20T16:15:53.635Z] Copying: 30/1024 [MB] (30 MBps) [...] [2024-11-20T16:16:26.877Z] Copying: 1024/1024 [MB] (average 29 MBps)
00:28:28.627 16:16:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:28:28.627 16:16:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:28:28.885 16:16:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:28:28.885 [2024-11-20 16:16:27.126646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.885 [2024-11-20 16:16:27.126695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:28.885 [2024-11-20 16:16:27.126708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:28.885 [2024-11-20 16:16:27.126717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.885 [2024-11-20 16:16:27.126753]
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:28.885 [2024-11-20 16:16:27.129008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.885 [2024-11-20 16:16:27.129168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:28.885 [2024-11-20 16:16:27.129185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:28:28.885 [2024-11-20 16:16:27.129193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.885 [2024-11-20 16:16:27.131119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.885 [2024-11-20 16:16:27.131147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:28.885 [2024-11-20 16:16:27.131157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:28:28.885 [2024-11-20 16:16:27.131163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.144 [2024-11-20 16:16:27.144933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.144 [2024-11-20 16:16:27.144964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:29.144 [2024-11-20 16:16:27.144974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.753 ms 00:28:29.144 [2024-11-20 16:16:27.144981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.144 [2024-11-20 16:16:27.149779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.144 [2024-11-20 16:16:27.149801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:29.144 [2024-11-20 16:16:27.149812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:28:29.144 [2024-11-20 16:16:27.149819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.144 [2024-11-20 16:16:27.168484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.144 [2024-11-20 16:16:27.168622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:29.144 [2024-11-20 16:16:27.168639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.609 ms 00:28:29.144 [2024-11-20 16:16:27.168646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.144 [2024-11-20 16:16:27.181131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.144 [2024-11-20 16:16:27.181156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:29.144 [2024-11-20 16:16:27.181167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:28:29.144 [2024-11-20 16:16:27.181177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.144 [2024-11-20 16:16:27.181293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.144 [2024-11-20 16:16:27.181301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:29.144 [2024-11-20 16:16:27.181310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:29.144 [2024-11-20 16:16:27.181316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.144 [2024-11-20 16:16:27.199251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.145 [2024-11-20 16:16:27.199354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:29.145 [2024-11-20 16:16:27.199369] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.918 ms
00:28:29.145 [2024-11-20 16:16:27.199376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:29.145 [2024-11-20 16:16:27.217099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:29.145 [2024-11-20 16:16:27.217123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:29.145 [2024-11-20 16:16:27.217133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms
00:28:29.145 [2024-11-20 16:16:27.217139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:29.145 [2024-11-20 16:16:27.234132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:29.145 [2024-11-20 16:16:27.234156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:29.145 [2024-11-20 16:16:27.234165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.960 ms
00:28:29.145 [2024-11-20 16:16:27.234171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:29.145 [2024-11-20 16:16:27.251347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:29.145 [2024-11-20 16:16:27.251372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:29.145 [2024-11-20 16:16:27.251382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.118 ms
00:28:29.145 [2024-11-20 16:16:27.251388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:29.145 [2024-11-20 16:16:27.251417] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:29.145 [2024-11-20 16:16:27.251429 - 16:16:27.252128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 - Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:29.146 [2024-11-20 16:16:27.252141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:29.146 [2024-11-20 16:16:27.252149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6062d0f1-1317-4c50-93e4-ed9106daa74a
00:28:29.146 [2024-11-20 16:16:27.252156] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:29.146 [2024-11-20 16:16:27.252165] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:29.146 [2024-11-20 16:16:27.252170] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:29.146 [2024-11-20 16:16:27.252180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:29.146 [2024-11-20 16:16:27.252186] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:29.146 [2024-11-20 16:16:27.252194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:29.146 [2024-11-20 16:16:27.252200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:29.146 [2024-11-20 16:16:27.252206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:29.146 [2024-11-20 16:16:27.252211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:29.146 [2024-11-20 16:16:27.252218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:29.146 [2024-11-20 16:16:27.252224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:29.146 [2024-11-20 16:16:27.252232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms
00:28:29.146 [2024-11-20 16:16:27.252237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:29.146 [2024-11-20 16:16:27.262593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
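The statistics block above reports total writes: 960, user writes: 0, WAF: inf, which is consistent with write amplification being reported as the ratio of total media writes to user writes (an assumption this log's numbers match, not a statement of the exact ftl_debug.c formula). As a worked check:

  \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} \to \infty

No user data has been written before this first, clean shutdown, so the 960 recorded writes are all FTL metadata and the ratio is reported as infinite.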
00:28:29.146 [2024-11-20 16:16:27.262619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:29.146 [2024-11-20 16:16:27.262628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.331 ms 00:28:29.146 [2024-11-20 16:16:27.262635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.146 [2024-11-20 16:16:27.262940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.146 [2024-11-20 16:16:27.262952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:29.146 [2024-11-20 16:16:27.262961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:28:29.146 [2024-11-20 16:16:27.262967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.146 [2024-11-20 16:16:27.297441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.146 [2024-11-20 16:16:27.297596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:29.146 [2024-11-20 16:16:27.297613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.146 [2024-11-20 16:16:27.297620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.146 [2024-11-20 16:16:27.297677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.146 [2024-11-20 16:16:27.297684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:29.146 [2024-11-20 16:16:27.297692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.146 [2024-11-20 16:16:27.297698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.146 [2024-11-20 16:16:27.297814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.146 [2024-11-20 16:16:27.297826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:29.146 [2024-11-20 16:16:27.297834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.146 [2024-11-20 16:16:27.297840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.146 [2024-11-20 16:16:27.297857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.146 [2024-11-20 16:16:27.297863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:29.146 [2024-11-20 16:16:27.297871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.146 [2024-11-20 16:16:27.297876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.146 [2024-11-20 16:16:27.362142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.146 [2024-11-20 16:16:27.362188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:29.146 [2024-11-20 16:16:27.362201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.146 [2024-11-20 16:16:27.362208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 16:16:27.412920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.404 [2024-11-20 16:16:27.412965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:29.404 [2024-11-20 16:16:27.412977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.404 [2024-11-20 16:16:27.412984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 
16:16:27.413065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.404 [2024-11-20 16:16:27.413074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:29.404 [2024-11-20 16:16:27.413082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.404 [2024-11-20 16:16:27.413091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 16:16:27.413147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.404 [2024-11-20 16:16:27.413155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:29.404 [2024-11-20 16:16:27.413164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.404 [2024-11-20 16:16:27.413170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 16:16:27.413254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.404 [2024-11-20 16:16:27.413263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:29.404 [2024-11-20 16:16:27.413273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.404 [2024-11-20 16:16:27.413281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 16:16:27.413309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.404 [2024-11-20 16:16:27.413317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:29.404 [2024-11-20 16:16:27.413326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.404 [2024-11-20 16:16:27.413332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 16:16:27.413368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.404 [2024-11-20 16:16:27.413375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:29.404 [2024-11-20 16:16:27.413383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.404 [2024-11-20 16:16:27.413389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.404 [2024-11-20 16:16:27.413433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.405 [2024-11-20 16:16:27.413442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:29.405 [2024-11-20 16:16:27.413449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.405 [2024-11-20 16:16:27.413456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.405 [2024-11-20 16:16:27.413575] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 286.891 ms, result 0 00:28:29.405 true 00:28:29.405 16:16:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80416 00:28:29.405 16:16:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80416 00:28:29.405 16:16:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:29.405 [2024-11-20 16:16:27.506592] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
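The records above close the first, clean 'FTL shutdown' (result 0) and then start the dirty-shutdown step itself: the script kills the idle SPDK target with SIGKILL (kill -9 80416), removes its shm trace file, prepares a second 1 GiB data set, and (at step 88, just below) re-drives the ftl0 bdev directly through spdk_dd, which boots its own minimal SPDK app from the JSON config saved earlier. A rough sketch of that sequence; $SPDK_DIR and $svcpid are placeholders for the concrete paths and for PID 80416 seen in the log:

  # crash the running target; nothing gets a chance to persist further state
  kill -9 "$svcpid"
  rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"

  # generate a second data set and write it straight to the ftl0 bdev;
  # --json points spdk_dd at the bdev config captured before the kill,
  # --seek places the new data after the first 262144-block region
  "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
  "$SPDK_DIR/build/bin/spdk_dd" --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json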
00:28:29.405 [2024-11-20 16:16:27.506710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81022 ] 00:28:29.663 [2024-11-20 16:16:27.678634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.663 [2024-11-20 16:16:27.798713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.052  [2024-11-20T16:16:30.238Z] Copying: 154/1024 [MB] (154 MBps) [2024-11-20T16:16:31.173Z] Copying: 320/1024 [MB] (165 MBps) [2024-11-20T16:16:32.106Z] Copying: 504/1024 [MB] (184 MBps) [2024-11-20T16:16:33.480Z] Copying: 693/1024 [MB] (188 MBps) [2024-11-20T16:16:33.480Z] Copying: 939/1024 [MB] (246 MBps) [2024-11-20T16:16:34.095Z] Copying: 1024/1024 [MB] (average 192 MBps) 00:28:35.845 00:28:35.845 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80416 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:35.845 16:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:35.845 [2024-11-20 16:16:34.072352] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:28:35.845 [2024-11-20 16:16:34.072628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81092 ] 00:28:36.104 [2024-11-20 16:16:34.228092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.104 [2024-11-20 16:16:34.308827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.362 [2024-11-20 16:16:34.525849] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:36.362 [2024-11-20 16:16:34.526009] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:36.362 [2024-11-20 16:16:34.588638] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:36.362 [2024-11-20 16:16:34.588863] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:36.362 [2024-11-20 16:16:34.589092] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:36.622 [2024-11-20 16:16:34.760634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.622 [2024-11-20 16:16:34.760863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:36.622 [2024-11-20 16:16:34.760882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:36.622 [2024-11-20 16:16:34.760889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.622 [2024-11-20 16:16:34.760954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.622 [2024-11-20 16:16:34.760963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.622 [2024-11-20 16:16:34.760971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:36.622 [2024-11-20 16:16:34.760977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.622 [2024-11-20 16:16:34.760995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:36.622 [2024-11-20 16:16:34.761567] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:36.622 [2024-11-20 16:16:34.761580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.622 [2024-11-20 16:16:34.761587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.622 [2024-11-20 16:16:34.761593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:28:36.622 [2024-11-20 16:16:34.761600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.622 [2024-11-20 16:16:34.762575] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:36.622 [2024-11-20 16:16:34.772350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.622 [2024-11-20 16:16:34.772467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:36.622 [2024-11-20 16:16:34.772480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.776 ms 00:28:36.622 [2024-11-20 16:16:34.772487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.622 [2024-11-20 16:16:34.772528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.622 [2024-11-20 16:16:34.772535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:36.622 [2024-11-20 16:16:34.772542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:36.622 [2024-11-20 16:16:34.772548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.776986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.777010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.623 [2024-11-20 16:16:34.777018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.393 ms 00:28:36.623 [2024-11-20 16:16:34.777023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.777078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.777085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.623 [2024-11-20 16:16:34.777091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:36.623 [2024-11-20 16:16:34.777097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.777133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.777140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:36.623 [2024-11-20 16:16:34.777147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:36.623 [2024-11-20 16:16:34.777153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.777168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:36.623 [2024-11-20 16:16:34.779882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.779905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.623 [2024-11-20 16:16:34.779912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:28:36.623 [2024-11-20 
16:16:34.779918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.779945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.779952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:36.623 [2024-11-20 16:16:34.779959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:36.623 [2024-11-20 16:16:34.779965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.779982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:36.623 [2024-11-20 16:16:34.779996] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:36.623 [2024-11-20 16:16:34.780024] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:36.623 [2024-11-20 16:16:34.780036] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:36.623 [2024-11-20 16:16:34.780117] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:36.623 [2024-11-20 16:16:34.780126] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:36.623 [2024-11-20 16:16:34.780134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:36.623 [2024-11-20 16:16:34.780143] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780152] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780159] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:36.623 [2024-11-20 16:16:34.780165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:36.623 [2024-11-20 16:16:34.780171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:36.623 [2024-11-20 16:16:34.780177] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:36.623 [2024-11-20 16:16:34.780183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.780189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:36.623 [2024-11-20 16:16:34.780196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:28:36.623 [2024-11-20 16:16:34.780201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.780265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.623 [2024-11-20 16:16:34.780273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:36.623 [2024-11-20 16:16:34.780280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:36.623 [2024-11-20 16:16:34.780285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.623 [2024-11-20 16:16:34.780362] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:36.623 [2024-11-20 16:16:34.780370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:36.623 [2024-11-20 16:16:34.780376] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:36.623 [2024-11-20 16:16:34.780394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:36.623 [2024-11-20 16:16:34.780411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.623 [2024-11-20 16:16:34.780423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:36.623 [2024-11-20 16:16:34.780432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:36.623 [2024-11-20 16:16:34.780437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.623 [2024-11-20 16:16:34.780442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:36.623 [2024-11-20 16:16:34.780448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:36.623 [2024-11-20 16:16:34.780454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:36.623 [2024-11-20 16:16:34.780466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:36.623 [2024-11-20 16:16:34.780482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:36.623 [2024-11-20 16:16:34.780499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:36.623 [2024-11-20 16:16:34.780514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:36.623 [2024-11-20 16:16:34.780530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:36.623 [2024-11-20 16:16:34.780546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.623 [2024-11-20 16:16:34.780556] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:36.623 [2024-11-20 16:16:34.780562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:36.623 [2024-11-20 16:16:34.780567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.623 [2024-11-20 16:16:34.780572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:36.623 [2024-11-20 16:16:34.780577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:36.623 [2024-11-20 16:16:34.780582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:36.623 [2024-11-20 16:16:34.780592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:36.623 [2024-11-20 16:16:34.780598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780603] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:36.623 [2024-11-20 16:16:34.780609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:36.623 [2024-11-20 16:16:34.780615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.623 [2024-11-20 16:16:34.780629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:36.623 [2024-11-20 16:16:34.780635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:36.623 [2024-11-20 16:16:34.780640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:36.623 [2024-11-20 16:16:34.780646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:36.623 [2024-11-20 16:16:34.780651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:36.623 [2024-11-20 16:16:34.780656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:36.623 [2024-11-20 16:16:34.780663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:36.623 [2024-11-20 16:16:34.780670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.623 [2024-11-20 16:16:34.780676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:36.623 [2024-11-20 16:16:34.780682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:36.623 [2024-11-20 16:16:34.780688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:36.623 [2024-11-20 16:16:34.780694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:36.623 [2024-11-20 16:16:34.780699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:36.624 [2024-11-20 16:16:34.780705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:36.624 [2024-11-20 16:16:34.780710] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:36.624 [2024-11-20 16:16:34.780716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:36.624 [2024-11-20 16:16:34.780735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:36.624 [2024-11-20 16:16:34.780741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:36.624 [2024-11-20 16:16:34.780747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:36.624 [2024-11-20 16:16:34.780753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:36.624 [2024-11-20 16:16:34.780759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:36.624 [2024-11-20 16:16:34.780764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:36.624 [2024-11-20 16:16:34.780770] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:36.624 [2024-11-20 16:16:34.780776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.624 [2024-11-20 16:16:34.780784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:36.624 [2024-11-20 16:16:34.780790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:36.624 [2024-11-20 16:16:34.780796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:36.624 [2024-11-20 16:16:34.780802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:36.624 [2024-11-20 16:16:34.780808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.780814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:36.624 [2024-11-20 16:16:34.780820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:28:36.624 [2024-11-20 16:16:34.780826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.802048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.802081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.624 [2024-11-20 16:16:34.802091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.188 ms 00:28:36.624 [2024-11-20 16:16:34.802097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.802163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.802173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:36.624 [2024-11-20 16:16:34.802180] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:36.624 [2024-11-20 16:16:34.802186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.839333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.839371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.624 [2024-11-20 16:16:34.839384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.098 ms 00:28:36.624 [2024-11-20 16:16:34.839391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.839436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.839444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.624 [2024-11-20 16:16:34.839451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.624 [2024-11-20 16:16:34.839457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.839810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.839824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.624 [2024-11-20 16:16:34.839832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:28:36.624 [2024-11-20 16:16:34.839839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.839963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.839971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.624 [2024-11-20 16:16:34.839978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:36.624 [2024-11-20 16:16:34.839985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.850561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.850587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.624 [2024-11-20 16:16:34.850595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.558 ms 00:28:36.624 [2024-11-20 16:16:34.850602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.624 [2024-11-20 16:16:34.860760] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:36.624 [2024-11-20 16:16:34.860794] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:36.624 [2024-11-20 16:16:34.860806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.624 [2024-11-20 16:16:34.860813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:36.624 [2024-11-20 16:16:34.860822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.127 ms 00:28:36.624 [2024-11-20 16:16:34.860829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.880071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.880102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:36.885 [2024-11-20 16:16:34.880120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.202 ms 
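Each startup step above logs a name/duration pair (e.g. Initialize NV cache at 37.098 ms), which makes it easy to see where dirty-shutdown recovery time goes. A throwaway triage one-liner along these lines ranks the slow steps; build.log stands in for wherever this console output was saved:

  # top per-step durations reported by trace_step, largest first
  grep -o 'duration: [0-9.]* ms' build.log | sort -k2 -rn | head

It only prints the numbers; pairing them back with the name: records takes a second pass, but for spotting outliers among the restore steps this is usually enough.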
00:28:36.885 [2024-11-20 16:16:34.880126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.889448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.889483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:36.885 [2024-11-20 16:16:34.889493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.288 ms 00:28:36.885 [2024-11-20 16:16:34.889500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.898601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.898780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:36.885 [2024-11-20 16:16:34.898796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.066 ms 00:28:36.885 [2024-11-20 16:16:34.898802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.899576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.899609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:36.885 [2024-11-20 16:16:34.899619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:28:36.885 [2024-11-20 16:16:34.899626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.944782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.944830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:36.885 [2024-11-20 16:16:34.944841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.139 ms 00:28:36.885 [2024-11-20 16:16:34.944848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.953071] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:36.885 [2024-11-20 16:16:34.955240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.955266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:36.885 [2024-11-20 16:16:34.955275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.348 ms 00:28:36.885 [2024-11-20 16:16:34.955282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.955355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.955363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:36.885 [2024-11-20 16:16:34.955371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:36.885 [2024-11-20 16:16:34.955377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.955445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.955453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:36.885 [2024-11-20 16:16:34.955460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:36.885 [2024-11-20 16:16:34.955466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.955481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 
16:16:34.955490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:36.885 [2024-11-20 16:16:34.955497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:36.885 [2024-11-20 16:16:34.955503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.955528] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:36.885 [2024-11-20 16:16:34.955535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.885 [2024-11-20 16:16:34.955541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:36.885 [2024-11-20 16:16:34.955548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:36.885 [2024-11-20 16:16:34.955554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.885 [2024-11-20 16:16:34.973780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.886 [2024-11-20 16:16:34.973926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:36.886 [2024-11-20 16:16:34.973942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.209 ms 00:28:36.886 [2024-11-20 16:16:34.973949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.886 [2024-11-20 16:16:34.974013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.886 [2024-11-20 16:16:34.974021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:36.886 [2024-11-20 16:16:34.974028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:36.886 [2024-11-20 16:16:34.974034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.886 [2024-11-20 16:16:34.974896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.877 ms, result 0 00:28:37.826  [2024-11-20T16:16:37.007Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T16:16:38.389Z] Copying: 66/1024 [MB] (40 MBps) [2024-11-20T16:16:39.330Z] Copying: 95/1024 [MB] (29 MBps) [2024-11-20T16:16:40.273Z] Copying: 122/1024 [MB] (26 MBps) [2024-11-20T16:16:41.212Z] Copying: 150/1024 [MB] (27 MBps) [2024-11-20T16:16:42.156Z] Copying: 176/1024 [MB] (26 MBps) [2024-11-20T16:16:43.095Z] Copying: 200/1024 [MB] (24 MBps) [2024-11-20T16:16:44.030Z] Copying: 227/1024 [MB] (26 MBps) [2024-11-20T16:16:45.404Z] Copying: 262/1024 [MB] (35 MBps) [2024-11-20T16:16:46.337Z] Copying: 307/1024 [MB] (44 MBps) [2024-11-20T16:16:47.272Z] Copying: 351/1024 [MB] (43 MBps) [2024-11-20T16:16:48.205Z] Copying: 394/1024 [MB] (43 MBps) [2024-11-20T16:16:49.137Z] Copying: 439/1024 [MB] (44 MBps) [2024-11-20T16:16:50.070Z] Copying: 483/1024 [MB] (44 MBps) [2024-11-20T16:16:51.005Z] Copying: 528/1024 [MB] (45 MBps) [2024-11-20T16:16:52.380Z] Copying: 576/1024 [MB] (47 MBps) [2024-11-20T16:16:53.312Z] Copying: 621/1024 [MB] (44 MBps) [2024-11-20T16:16:54.245Z] Copying: 666/1024 [MB] (45 MBps) [2024-11-20T16:16:55.179Z] Copying: 711/1024 [MB] (45 MBps) [2024-11-20T16:16:56.112Z] Copying: 756/1024 [MB] (44 MBps) [2024-11-20T16:16:57.044Z] Copying: 801/1024 [MB] (45 MBps) [2024-11-20T16:16:58.415Z] Copying: 847/1024 [MB] (46 MBps) [2024-11-20T16:16:59.350Z] Copying: 895/1024 [MB] (48 MBps) [2024-11-20T16:17:00.284Z] Copying: 941/1024 [MB] (45 MBps) [2024-11-20T16:17:01.303Z] Copying: 987/1024 [MB] (46 MBps) [2024-11-20T16:17:01.869Z] Copying: 1023/1024 [MB] (35 
MBps) [2024-11-20T16:17:01.869Z] Copying: 1024/1024 [MB] (average 38 MBps)[2024-11-20 16:17:01.864655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.619 [2024-11-20 16:17:01.864710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:03.619 [2024-11-20 16:17:01.864738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:03.619 [2024-11-20 16:17:01.864748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.619 [2024-11-20 16:17:01.866734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:03.879 [2024-11-20 16:17:01.872440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.879 [2024-11-20 16:17:01.872471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:03.880 [2024-11-20 16:17:01.872483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.669 ms 00:29:03.880 [2024-11-20 16:17:01.872492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:01.883267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:01.883301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:03.880 [2024-11-20 16:17:01.883311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms 00:29:03.880 [2024-11-20 16:17:01.883319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:01.900783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:01.900812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:03.880 [2024-11-20 16:17:01.900822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.449 ms 00:29:03.880 [2024-11-20 16:17:01.900830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:01.906929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:01.906961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:03.880 [2024-11-20 16:17:01.906971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.075 ms 00:29:03.880 [2024-11-20 16:17:01.906980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:01.929818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:01.929945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:03.880 [2024-11-20 16:17:01.929962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:29:03.880 [2024-11-20 16:17:01.929970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:01.943271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:01.943303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:03.880 [2024-11-20 16:17:01.943314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.274 ms 00:29:03.880 [2024-11-20 16:17:01.943323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:01.997027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:01.997165] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:03.880 [2024-11-20 16:17:01.997188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.671 ms 00:29:03.880 [2024-11-20 16:17:01.997196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:02.020004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:02.020140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:03.880 [2024-11-20 16:17:02.020155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.791 ms 00:29:03.880 [2024-11-20 16:17:02.020163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:02.042332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:02.042372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:03.880 [2024-11-20 16:17:02.042382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.128 ms 00:29:03.880 [2024-11-20 16:17:02.042390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:02.063929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:02.063960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:03.880 [2024-11-20 16:17:02.063971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.509 ms 00:29:03.880 [2024-11-20 16:17:02.063978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:02.086053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.880 [2024-11-20 16:17:02.086082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:03.880 [2024-11-20 16:17:02.086093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.008 ms 00:29:03.880 [2024-11-20 16:17:02.086100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.880 [2024-11-20 16:17:02.086129] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:03.880 [2024-11-20 16:17:02.086142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:29:03.880 [2024-11-20 16:17:02.086152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086214] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 
16:17:02.086405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:03.880 [2024-11-20 16:17:02.086556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
00:29:03.881 [2024-11-20 16:17:02.086594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:03.881 [2024-11-20 16:17:02.086964] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:03.881 [2024-11-20 16:17:02.086972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6062d0f1-1317-4c50-93e4-ed9106daa74a 00:29:03.881 [2024-11-20 16:17:02.086981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:29:03.881 [2024-11-20 16:17:02.086992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130240 00:29:03.881 [2024-11-20 16:17:02.087004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:29:03.881 [2024-11-20 16:17:02.087013] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:29:03.881 [2024-11-20 16:17:02.087020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:03.881 [2024-11-20 16:17:02.087027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:03.881 [2024-11-20 16:17:02.087034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:03.881 [2024-11-20 16:17:02.087041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:03.881 [2024-11-20 16:17:02.087048] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:03.881 [2024-11-20 16:17:02.087054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.881 [2024-11-20 16:17:02.087062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:03.881 [2024-11-20 16:17:02.087070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:29:03.881 [2024-11-20 16:17:02.087077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.881 [2024-11-20 16:17:02.099249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.881 [2024-11-20 16:17:02.099277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:03.881 [2024-11-20 16:17:02.099286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.158 ms 00:29:03.881 [2024-11-20 16:17:02.099294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.881 [2024-11-20 16:17:02.099638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.881 [2024-11-20 16:17:02.099654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:03.881 [2024-11-20 16:17:02.099663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:29:03.881 [2024-11-20 16:17:02.099675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.131752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.131789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:04.139 [2024-11-20 16:17:02.131800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.131808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.131870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.131879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:04.139 [2024-11-20 16:17:02.131886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.131896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.131949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.131958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:04.139 [2024-11-20 16:17:02.131966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.131973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.131988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.131996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:04.139 [2024-11-20 16:17:02.132003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.132010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.207457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.207507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:04.139 [2024-11-20 16:17:02.207519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.207527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.269430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.269478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:04.139 [2024-11-20 16:17:02.269489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.269498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.269574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.269584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:04.139 [2024-11-20 16:17:02.269592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.139 [2024-11-20 16:17:02.269600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.139 [2024-11-20 16:17:02.269633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.139 [2024-11-20 16:17:02.269641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:04.140 [2024-11-20 16:17:02.269649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.140 [2024-11-20 16:17:02.269656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.140 [2024-11-20 16:17:02.269764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.140 [2024-11-20 16:17:02.269793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:04.140 [2024-11-20 16:17:02.269801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.140 [2024-11-20 16:17:02.269809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.140 [2024-11-20 16:17:02.269837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.140 [2024-11-20 16:17:02.269846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:04.140 [2024-11-20 16:17:02.269854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.140 [2024-11-20 16:17:02.269861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.140 [2024-11-20 16:17:02.269894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.140 [2024-11-20 16:17:02.269905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:04.140 [2024-11-20 16:17:02.269912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.140 [2024-11-20 16:17:02.269919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.140 [2024-11-20 16:17:02.269957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.140 [2024-11-20 16:17:02.269966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:04.140 [2024-11-20 16:17:02.269975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.140 [2024-11-20 16:17:02.269982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.140 [2024-11-20 16:17:02.270091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.325 ms, result 0 00:29:07.417 00:29:07.417 00:29:07.417 16:17:04 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:08.797 16:17:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:09.056 [2024-11-20 16:17:07.087970] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:29:09.056 [2024-11-20 16:17:07.088093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81425 ] 00:29:09.056 [2024-11-20 16:17:07.245694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.315 [2024-11-20 16:17:07.329425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.315 [2024-11-20 16:17:07.546532] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:09.315 [2024-11-20 16:17:07.546588] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:09.576 [2024-11-20 16:17:07.693990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.694034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:09.576 [2024-11-20 16:17:07.694046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:09.576 [2024-11-20 16:17:07.694053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.694087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.694095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:09.576 [2024-11-20 16:17:07.694103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:09.576 [2024-11-20 16:17:07.694109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.694123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:09.576 [2024-11-20 16:17:07.694642] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:09.576 [2024-11-20 16:17:07.694654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.694660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:09.576 [2024-11-20 16:17:07.694667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:29:09.576 [2024-11-20 16:17:07.694673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.695665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:09.576 [2024-11-20 16:17:07.705595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.705625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:09.576 [2024-11-20 16:17:07.705634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.932 ms 00:29:09.576 [2024-11-20 16:17:07.705641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.705690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.705698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:09.576 [2024-11-20 16:17:07.705705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:09.576 [2024-11-20 16:17:07.705711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.710496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.710634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:09.576 [2024-11-20 16:17:07.710647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.727 ms 00:29:09.576 [2024-11-20 16:17:07.710658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.710714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.710731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:09.576 [2024-11-20 16:17:07.710738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:09.576 [2024-11-20 16:17:07.710744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.710787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.710795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:09.576 [2024-11-20 16:17:07.710801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:09.576 [2024-11-20 16:17:07.710807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.710826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:09.576 [2024-11-20 16:17:07.713675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.713800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:09.576 [2024-11-20 16:17:07.713813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:29:09.576 [2024-11-20 16:17:07.713823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.713847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.713854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:09.576 [2024-11-20 16:17:07.713860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:09.576 [2024-11-20 16:17:07.713866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.713881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:09.576 [2024-11-20 16:17:07.713896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:09.576 [2024-11-20 16:17:07.713923] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:09.576 [2024-11-20 16:17:07.713937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:09.576 [2024-11-20 16:17:07.714018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:09.576 [2024-11-20 16:17:07.714026] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:09.576 [2024-11-20 16:17:07.714035] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:09.576 [2024-11-20 16:17:07.714042] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714049] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714055] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:09.576 [2024-11-20 16:17:07.714060] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:09.576 [2024-11-20 16:17:07.714065] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:09.576 [2024-11-20 16:17:07.714073] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:09.576 [2024-11-20 16:17:07.714080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.714085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:09.576 [2024-11-20 16:17:07.714091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:29:09.576 [2024-11-20 16:17:07.714097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.714161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.576 [2024-11-20 16:17:07.714167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:09.576 [2024-11-20 16:17:07.714173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:09.576 [2024-11-20 16:17:07.714179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.576 [2024-11-20 16:17:07.714257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:09.576 [2024-11-20 16:17:07.714265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:09.576 [2024-11-20 16:17:07.714271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:09.576 [2024-11-20 16:17:07.714288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:09.576 [2024-11-20 16:17:07.714305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:09.576 [2024-11-20 16:17:07.714316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:09.576 [2024-11-20 16:17:07.714321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:09.576 [2024-11-20 16:17:07.714326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:09.576 [2024-11-20 16:17:07.714331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:09.576 [2024-11-20 16:17:07.714337] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:09.576 [2024-11-20 16:17:07.714348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:09.576 [2024-11-20 16:17:07.714358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:09.576 [2024-11-20 16:17:07.714373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:09.576 [2024-11-20 16:17:07.714389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:09.576 [2024-11-20 16:17:07.714405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:09.576 [2024-11-20 16:17:07.714420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:09.576 [2024-11-20 16:17:07.714431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:09.576 [2024-11-20 16:17:07.714436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:09.576 [2024-11-20 16:17:07.714441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:09.576 [2024-11-20 16:17:07.714446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:09.576 [2024-11-20 16:17:07.714451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:09.576 [2024-11-20 16:17:07.714456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:09.576 [2024-11-20 16:17:07.714461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:09.577 [2024-11-20 16:17:07.714466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:09.577 [2024-11-20 16:17:07.714471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:09.577 [2024-11-20 16:17:07.714476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:09.577 [2024-11-20 16:17:07.714480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:09.577 [2024-11-20 16:17:07.714486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:09.577 [2024-11-20 16:17:07.714491] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:09.577 [2024-11-20 16:17:07.714497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:09.577 [2024-11-20 16:17:07.714503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:09.577 [2024-11-20 16:17:07.714510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:09.577 [2024-11-20 16:17:07.714516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:09.577 [2024-11-20 16:17:07.714521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:09.577 [2024-11-20 16:17:07.714526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:09.577 [2024-11-20 16:17:07.714532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:09.577 [2024-11-20 16:17:07.714537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:09.577 [2024-11-20 16:17:07.714542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:09.577 [2024-11-20 16:17:07.714549] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:09.577 [2024-11-20 16:17:07.714556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:09.577 [2024-11-20 16:17:07.714568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:09.577 [2024-11-20 16:17:07.714574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:09.577 [2024-11-20 16:17:07.714580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:09.577 [2024-11-20 16:17:07.714585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:09.577 [2024-11-20 16:17:07.714590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:09.577 [2024-11-20 16:17:07.714596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:09.577 [2024-11-20 16:17:07.714601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:09.577 [2024-11-20 16:17:07.714606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:09.577 [2024-11-20 16:17:07.714612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:29:09.577 [2024-11-20 16:17:07.714639] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:09.577 [2024-11-20 16:17:07.714647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:09.577 [2024-11-20 16:17:07.714660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:09.577 [2024-11-20 16:17:07.714665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:09.577 [2024-11-20 16:17:07.714671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:09.577 [2024-11-20 16:17:07.714677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.714683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:09.577 [2024-11-20 16:17:07.714689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:29:09.577 [2024-11-20 16:17:07.714694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.736731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.736845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:09.577 [2024-11-20 16:17:07.736887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.972 ms 00:29:09.577 [2024-11-20 16:17:07.736905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.736984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.737066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:09.577 [2024-11-20 16:17:07.737085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:09.577 [2024-11-20 16:17:07.737099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.773570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.773604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:09.577 [2024-11-20 16:17:07.773614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.398 ms 00:29:09.577 [2024-11-20 16:17:07.773620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.773656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.773664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:09.577 [2024-11-20 16:17:07.773673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:09.577 [2024-11-20 16:17:07.773679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.774020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.774038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:09.577 [2024-11-20 
16:17:07.774045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:29:09.577 [2024-11-20 16:17:07.774051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.774153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.774160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:09.577 [2024-11-20 16:17:07.774167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:09.577 [2024-11-20 16:17:07.774177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.785128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.785247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:09.577 [2024-11-20 16:17:07.785260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.935 ms 00:29:09.577 [2024-11-20 16:17:07.785270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.795088] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:09.577 [2024-11-20 16:17:07.795119] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:09.577 [2024-11-20 16:17:07.795129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.795136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:09.577 [2024-11-20 16:17:07.795143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.784 ms 00:29:09.577 [2024-11-20 16:17:07.795150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.814021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.814051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:09.577 [2024-11-20 16:17:07.814060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.838 ms 00:29:09.577 [2024-11-20 16:17:07.814067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.577 [2024-11-20 16:17:07.823245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.577 [2024-11-20 16:17:07.823279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:09.577 [2024-11-20 16:17:07.823287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.144 ms 00:29:09.577 [2024-11-20 16:17:07.823293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.832223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.832326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:09.835 [2024-11-20 16:17:07.832339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.902 ms 00:29:09.835 [2024-11-20 16:17:07.832345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.832868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.832887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:09.835 [2024-11-20 16:17:07.832894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.465 ms 00:29:09.835 [2024-11-20 16:17:07.832902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.877869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.877909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:09.835 [2024-11-20 16:17:07.877924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.952 ms 00:29:09.835 [2024-11-20 16:17:07.877931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.885889] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:09.835 [2024-11-20 16:17:07.887879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.887904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:09.835 [2024-11-20 16:17:07.887913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.913 ms 00:29:09.835 [2024-11-20 16:17:07.887920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.887980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.887989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:09.835 [2024-11-20 16:17:07.887997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:09.835 [2024-11-20 16:17:07.888006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.889283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.889310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:09.835 [2024-11-20 16:17:07.889319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:29:09.835 [2024-11-20 16:17:07.889325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.889362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.889370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:09.835 [2024-11-20 16:17:07.889377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:09.835 [2024-11-20 16:17:07.889383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.889411] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:09.835 [2024-11-20 16:17:07.889419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.889425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:09.835 [2024-11-20 16:17:07.889431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:09.835 [2024-11-20 16:17:07.889437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.907999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.908026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:09.835 [2024-11-20 16:17:07.908035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.547 ms 00:29:09.835 [2024-11-20 16:17:07.908044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:09.835 [2024-11-20 16:17:07.908101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.835 [2024-11-20 16:17:07.908108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:09.835 [2024-11-20 16:17:07.908115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:09.835 [2024-11-20 16:17:07.908121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.835 [2024-11-20 16:17:07.908885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 214.531 ms, result 0 00:29:11.207
[2024-11-20T16:17:10.414Z] Copying: 1884/1048576 [kB] (1884 kBps) [... incremental spdk_dd copy-progress updates elided ...] [2024-11-20T16:18:03.563Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-20 16:18:03.478446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.478512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:05.313 [2024-11-20 16:18:03.478526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:05.313 [2024-11-20 16:18:03.478534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.478555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:05.313 [2024-11-20 16:18:03.481473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.481650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:05.313 [2024-11-20 16:18:03.481665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:30:05.313 [2024-11-20 16:18:03.481675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.481915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.481925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:05.313 [2024-11-20 16:18:03.481937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:30:05.313 [2024-11-20 16:18:03.481944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.493294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.493319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:05.313 [2024-11-20 16:18:03.493328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.336 ms 00:30:05.313 [2024-11-20 16:18:03.493336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.499542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.499566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:05.313 [2024-11-20 16:18:03.499582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:30:05.313 [2024-11-20 16:18:03.499591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.524132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.524161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:05.313 [2024-11-20 16:18:03.524172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.491 ms 00:30:05.313 [2024-11-20 16:18:03.524180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.538070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.538097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map
metadata 00:30:05.313 [2024-11-20 16:18:03.538109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.858 ms 00:30:05.313 [2024-11-20 16:18:03.538118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.313 [2024-11-20 16:18:03.542662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.313 [2024-11-20 16:18:03.542760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:05.313 [2024-11-20 16:18:03.542812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms 00:30:05.313 [2024-11-20 16:18:03.542834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.576 [2024-11-20 16:18:03.566088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.576 [2024-11-20 16:18:03.566193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:05.576 [2024-11-20 16:18:03.566242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.220 ms 00:30:05.576 [2024-11-20 16:18:03.566264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.576 [2024-11-20 16:18:03.589671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.576 [2024-11-20 16:18:03.589805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:05.576 [2024-11-20 16:18:03.589867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.369 ms 00:30:05.576 [2024-11-20 16:18:03.589889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.576 [2024-11-20 16:18:03.613171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.576 [2024-11-20 16:18:03.613291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:05.576 [2024-11-20 16:18:03.613345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.963 ms 00:30:05.576 [2024-11-20 16:18:03.613367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.576 [2024-11-20 16:18:03.636359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.576 [2024-11-20 16:18:03.636471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:05.576 [2024-11-20 16:18:03.636517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.927 ms 00:30:05.576 [2024-11-20 16:18:03.636538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.576 [2024-11-20 16:18:03.636576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:05.576 [2024-11-20 16:18:03.636604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:05.576 [2024-11-20 16:18:03.636636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:05.576 [2024-11-20 16:18:03.636666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636825] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.636967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637673] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:05.576 [2024-11-20 16:18:03.637945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.637998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 
16:18:03.638081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:30:05.577 [2024-11-20 16:18:03.638264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:05.577 [2024-11-20 16:18:03.638412] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:05.577 [2024-11-20 16:18:03.638420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6062d0f1-1317-4c50-93e4-ed9106daa74a 00:30:05.577 [2024-11-20 16:18:03.638428] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:05.577 [2024-11-20 16:18:03.638435] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135360 00:30:05.577 [2024-11-20 16:18:03.638442] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133376 00:30:05.577 [2024-11-20 16:18:03.638455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:30:05.577 [2024-11-20 16:18:03.638461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
limits: 00:30:05.577 [2024-11-20 16:18:03.638469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:05.577 [2024-11-20 16:18:03.638476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:05.577 [2024-11-20 16:18:03.638487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:05.577 [2024-11-20 16:18:03.638494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:05.577 [2024-11-20 16:18:03.638501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.577 [2024-11-20 16:18:03.638510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:05.577 [2024-11-20 16:18:03.638517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:30:05.577 [2024-11-20 16:18:03.638525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.577 [2024-11-20 16:18:03.650707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.577 [2024-11-20 16:18:03.650744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:05.577 [2024-11-20 16:18:03.650753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.163 ms 00:30:05.577 [2024-11-20 16:18:03.650761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.577 [2024-11-20 16:18:03.651157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.577 [2024-11-20 16:18:03.651176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:05.577 [2024-11-20 16:18:03.651184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:30:05.577 [2024-11-20 16:18:03.651192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.577 [2024-11-20 16:18:03.684206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.577 [2024-11-20 16:18:03.684244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:05.577 [2024-11-20 16:18:03.684254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.577 [2024-11-20 16:18:03.684262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.577 [2024-11-20 16:18:03.684318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.577 [2024-11-20 16:18:03.684327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:05.577 [2024-11-20 16:18:03.684335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.577 [2024-11-20 16:18:03.684342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.577 [2024-11-20 16:18:03.684427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.577 [2024-11-20 16:18:03.684437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:05.577 [2024-11-20 16:18:03.684446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.577 [2024-11-20 16:18:03.684453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.578 [2024-11-20 16:18:03.684467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.578 [2024-11-20 16:18:03.684475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:05.578 [2024-11-20 16:18:03.684482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.578 [2024-11-20 16:18:03.684489] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.578 [2024-11-20 16:18:03.770733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.578 [2024-11-20 16:18:03.770778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.578 [2024-11-20 16:18:03.770795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.578 [2024-11-20 16:18:03.770807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.872858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.872915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:05.839 [2024-11-20 16:18:03.872931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.872943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.873033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:05.839 [2024-11-20 16:18:03.873045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.873055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.873129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:05.839 [2024-11-20 16:18:03.873141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.873151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.873298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:05.839 [2024-11-20 16:18:03.873314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.873325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.873381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:05.839 [2024-11-20 16:18:03.873394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.873405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.873467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:05.839 [2024-11-20 16:18:03.873478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.873494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.839 [2024-11-20 16:18:03.873564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:05.839 [2024-11-20 16:18:03.873576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:05.839 [2024-11-20 16:18:03.873588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.839 [2024-11-20 16:18:03.873772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.250 ms, result 0 00:30:06.782 00:30:06.782 00:30:06.782 16:18:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.701 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:08.701 16:18:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.962 [2024-11-20 16:18:06.956383] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:30:08.962 [2024-11-20 16:18:06.956502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82034 ] 00:30:08.962 [2024-11-20 16:18:07.108068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.962 [2024-11-20 16:18:07.210083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.223 [2024-11-20 16:18:07.468331] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:09.224 [2024-11-20 16:18:07.468393] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:09.508 [2024-11-20 16:18:07.632644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.632708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:09.508 [2024-11-20 16:18:07.632751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:09.508 [2024-11-20 16:18:07.632764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.632829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.632843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:09.508 [2024-11-20 16:18:07.632857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:09.508 [2024-11-20 16:18:07.632867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.632895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:09.508 [2024-11-20 16:18:07.633892] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:09.508 [2024-11-20 16:18:07.633932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.633945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:09.508 [2024-11-20 16:18:07.633957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:30:09.508 [2024-11-20 16:18:07.633968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.635284] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:09.508 [2024-11-20 16:18:07.656100] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.656267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:09.508 [2024-11-20 16:18:07.656291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.818 ms 00:30:09.508 [2024-11-20 16:18:07.656304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.656397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.656411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:09.508 [2024-11-20 16:18:07.656424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:09.508 [2024-11-20 16:18:07.656437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.661551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.661596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:09.508 [2024-11-20 16:18:07.661616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:30:09.508 [2024-11-20 16:18:07.661628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.661713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.661746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:09.508 [2024-11-20 16:18:07.661759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:09.508 [2024-11-20 16:18:07.661774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.661824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.661836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:09.508 [2024-11-20 16:18:07.661849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:09.508 [2024-11-20 16:18:07.661860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.661892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:09.508 [2024-11-20 16:18:07.666983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.667129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:09.508 [2024-11-20 16:18:07.667154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:30:09.508 [2024-11-20 16:18:07.667164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.667205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.667217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:09.508 [2024-11-20 16:18:07.667229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:09.508 [2024-11-20 16:18:07.667240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.667301] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:09.508 [2024-11-20 16:18:07.667328] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:09.508 [2024-11-20 
16:18:07.667373] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:09.508 [2024-11-20 16:18:07.667396] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:09.508 [2024-11-20 16:18:07.667533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:09.508 [2024-11-20 16:18:07.667548] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:09.508 [2024-11-20 16:18:07.667561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:09.508 [2024-11-20 16:18:07.667575] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:09.508 [2024-11-20 16:18:07.667587] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:09.508 [2024-11-20 16:18:07.667598] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:09.508 [2024-11-20 16:18:07.667609] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:09.508 [2024-11-20 16:18:07.667622] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:09.508 [2024-11-20 16:18:07.667633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:09.508 [2024-11-20 16:18:07.667644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.667654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:09.508 [2024-11-20 16:18:07.667665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:30:09.508 [2024-11-20 16:18:07.667675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.508 [2024-11-20 16:18:07.667805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.508 [2024-11-20 16:18:07.667819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:09.509 [2024-11-20 16:18:07.667829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:09.509 [2024-11-20 16:18:07.667840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.509 [2024-11-20 16:18:07.667977] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:09.509 [2024-11-20 16:18:07.667992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:09.509 [2024-11-20 16:18:07.668005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:09.509 [2024-11-20 16:18:07.668044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:09.509 [2024-11-20 16:18:07.668080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:09.509 [2024-11-20 
16:18:07.668103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:09.509 [2024-11-20 16:18:07.668114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:09.509 [2024-11-20 16:18:07.668125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:09.509 [2024-11-20 16:18:07.668136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:09.509 [2024-11-20 16:18:07.668148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:09.509 [2024-11-20 16:18:07.668166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:09.509 [2024-11-20 16:18:07.668188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:09.509 [2024-11-20 16:18:07.668223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:09.509 [2024-11-20 16:18:07.668259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:09.509 [2024-11-20 16:18:07.668293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:09.509 [2024-11-20 16:18:07.668329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:09.509 [2024-11-20 16:18:07.668362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:09.509 [2024-11-20 16:18:07.668384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:09.509 [2024-11-20 16:18:07.668395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:09.509 [2024-11-20 16:18:07.668405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:09.509 [2024-11-20 16:18:07.668417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:09.509 [2024-11-20 16:18:07.668428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:09.509 [2024-11-20 16:18:07.668439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:09.509 [2024-11-20 16:18:07.668462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:30:09.509 [2024-11-20 16:18:07.668472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668484] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:09.509 [2024-11-20 16:18:07.668496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:09.509 [2024-11-20 16:18:07.668508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.509 [2024-11-20 16:18:07.668533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:09.509 [2024-11-20 16:18:07.668546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:09.509 [2024-11-20 16:18:07.668558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:09.509 [2024-11-20 16:18:07.668570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:09.509 [2024-11-20 16:18:07.668581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:09.509 [2024-11-20 16:18:07.668592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:09.509 [2024-11-20 16:18:07.668607] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:09.509 [2024-11-20 16:18:07.668622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:09.509 [2024-11-20 16:18:07.668650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:09.509 [2024-11-20 16:18:07.668662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:09.509 [2024-11-20 16:18:07.668675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:09.509 [2024-11-20 16:18:07.668687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:09.509 [2024-11-20 16:18:07.668699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:09.509 [2024-11-20 16:18:07.668711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:09.509 [2024-11-20 16:18:07.668738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:09.509 [2024-11-20 16:18:07.668751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:09.509 [2024-11-20 16:18:07.668763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668788] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:09.509 [2024-11-20 16:18:07.668825] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:09.509 [2024-11-20 16:18:07.668839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:09.509 [2024-11-20 16:18:07.668865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:09.509 [2024-11-20 16:18:07.668878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:09.509 [2024-11-20 16:18:07.668891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:09.509 [2024-11-20 16:18:07.668904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.509 [2024-11-20 16:18:07.668916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:09.509 [2024-11-20 16:18:07.668929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:30:09.509 [2024-11-20 16:18:07.668940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.509 [2024-11-20 16:18:07.708101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.509 [2024-11-20 16:18:07.708150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:09.509 [2024-11-20 16:18:07.708167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.094 ms 00:30:09.509 [2024-11-20 16:18:07.708182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.509 [2024-11-20 16:18:07.708297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.509 [2024-11-20 16:18:07.708312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:09.509 [2024-11-20 16:18:07.708326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:30:09.509 [2024-11-20 16:18:07.708338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.766482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.766524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:09.775 [2024-11-20 16:18:07.766536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.065 ms 00:30:09.775 [2024-11-20 16:18:07.766544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.766591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.766601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:09.775 
[2024-11-20 16:18:07.766612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:09.775 [2024-11-20 16:18:07.766619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.767013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.767029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:09.775 [2024-11-20 16:18:07.767038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:30:09.775 [2024-11-20 16:18:07.767046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.767171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.767186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:09.775 [2024-11-20 16:18:07.767199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:30:09.775 [2024-11-20 16:18:07.767206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.780187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.780325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:09.775 [2024-11-20 16:18:07.780341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.963 ms 00:30:09.775 [2024-11-20 16:18:07.780350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.793398] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:09.775 [2024-11-20 16:18:07.793530] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:09.775 [2024-11-20 16:18:07.793590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.793611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:09.775 [2024-11-20 16:18:07.793631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.133 ms 00:30:09.775 [2024-11-20 16:18:07.793650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.827362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.827486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:09.775 [2024-11-20 16:18:07.827541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.651 ms 00:30:09.775 [2024-11-20 16:18:07.827564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.839249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.839351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:09.775 [2024-11-20 16:18:07.839400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.639 ms 00:30:09.775 [2024-11-20 16:18:07.839422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.850961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.851095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:09.775 [2024-11-20 16:18:07.851149] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.262 ms 00:30:09.775 [2024-11-20 16:18:07.851172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.852081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.852198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:09.775 [2024-11-20 16:18:07.852258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:30:09.775 [2024-11-20 16:18:07.852281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.906667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.906869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:09.775 [2024-11-20 16:18:07.906950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.050 ms 00:30:09.775 [2024-11-20 16:18:07.906987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.917772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:09.775 [2024-11-20 16:18:07.920310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.920413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:09.775 [2024-11-20 16:18:07.920463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.849 ms 00:30:09.775 [2024-11-20 16:18:07.920485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.775 [2024-11-20 16:18:07.920590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.775 [2024-11-20 16:18:07.920619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:09.776 [2024-11-20 16:18:07.920643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:09.776 [2024-11-20 16:18:07.920662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.776 [2024-11-20 16:18:07.921225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.776 [2024-11-20 16:18:07.921364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:09.776 [2024-11-20 16:18:07.921418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:30:09.776 [2024-11-20 16:18:07.921440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.776 [2024-11-20 16:18:07.921480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.776 [2024-11-20 16:18:07.921502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:09.776 [2024-11-20 16:18:07.921521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:09.776 [2024-11-20 16:18:07.921540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.776 [2024-11-20 16:18:07.921585] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:09.776 [2024-11-20 16:18:07.921608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.776 [2024-11-20 16:18:07.921628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:09.776 [2024-11-20 16:18:07.921676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:09.776 [2024-11-20 16:18:07.921736] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:09.776 [2024-11-20 16:18:07.945395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.776 [2024-11-20 16:18:07.945500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:09.776 [2024-11-20 16:18:07.945553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.620 ms 00:30:09.776 [2024-11-20 16:18:07.945575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.776 [2024-11-20 16:18:07.945907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.776 [2024-11-20 16:18:07.945963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:09.776 [2024-11-20 16:18:07.945988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:09.776 [2024-11-20 16:18:07.946007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.776 [2024-11-20 16:18:07.946962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.893 ms, result 0 00:30:11.161
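The 'FTL startup' management process above emits each step as an Action / name / duration / status quartet via trace_step, then a single finish_msg with the total (313.893 ms here). When reviewing a run like this one, it can help to reduce the quartets to a per-step duration table. A minimal sketch, assuming the raw log has been saved one record per line as ftl.log (the file name and the use of awk are assumptions of this note; the patterns simply mirror the trace_step records above):

    # Pair each "name: <step>" record with the "duration: <ms>" record that follows it.
    awk '/428:trace_step:/ && /name: /     { sub(/.*name: /, "");     step = $0 }
         /430:trace_step:/ && /duration: / { sub(/.*duration: /, ""); printf "%-35s %s\n", step, $0 }' ftl.log

Against the startup above this prints rows such as 'Restore P2L checkpoints   54.050 ms', which makes the slow steps easy to spot.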
[ ~100 per-interval 'Copying:' progress records (16:18:10 through 16:19:46, throughput between roughly 9.5 and 16 MBps throughout) elided; final record kept below ]
[2024-11-20T16:19:46.520Z] Copying: 1024/1024 [MB] (average 10 MBps)
[2024-11-20 16:19:46.515291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.270 [2024-11-20 16:19:46.515390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:48.270 [2024-11-20 16:19:46.515409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:48.270 [2024-11-20 16:19:46.515420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.270 [2024-11-20 16:19:46.515449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:48.533 [2024-11-20 16:19:46.519000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.519054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:48.533 [2024-11-20 16:19:46.519067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:31:48.533 [2024-11-20 16:19:46.519078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.519601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.519615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:48.533 [2024-11-20 16:19:46.519626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:31:48.533 [2024-11-20 16:19:46.519636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.523706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.523740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:48.533 [2024-11-20 16:19:46.523753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.054 ms 00:31:48.533 [2024-11-20 16:19:46.523768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.530194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.530236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:48.533 [2024-11-20 16:19:46.530249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.406 ms 00:31:48.533 [2024-11-20 16:19:46.530259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.557739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.557790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:48.533 [2024-11-20 16:19:46.557803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.378 ms 00:31:48.533 [2024-11-20 16:19:46.557811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.573313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.573519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:48.533 [2024-11-20 16:19:46.573545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.450 ms
00:31:48.533 [2024-11-20 16:19:46.573554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.578053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.578102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:48.533 [2024-11-20 16:19:46.578114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.368 ms 00:31:48.533 [2024-11-20 16:19:46.578122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.604106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.604307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:48.533 [2024-11-20 16:19:46.604329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.968 ms 00:31:48.533 [2024-11-20 16:19:46.604337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.629574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.629634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:48.533 [2024-11-20 16:19:46.629647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.146 ms 00:31:48.533 [2024-11-20 16:19:46.629655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.654449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.654494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:48.533 [2024-11-20 16:19:46.654506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.747 ms 00:31:48.533 [2024-11-20 16:19:46.654514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.679427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.533 [2024-11-20 16:19:46.679474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:48.533 [2024-11-20 16:19:46.679487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.837 ms 00:31:48.533 [2024-11-20 16:19:46.679495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.533 [2024-11-20 16:19:46.679541] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:48.533 [2024-11-20 16:19:46.679564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:48.533 [2024-11-20 16:19:46.679579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:48.533 [2024-11-20 16:19:46.679588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679630] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:48.533 [2024-11-20 16:19:46.679799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 
16:19:46.679870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.679995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:31:48.534 [2024-11-20 16:19:46.680076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:48.534 [2024-11-20 16:19:46.680429] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:48.534 [2024-11-20 16:19:46.680437] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6062d0f1-1317-4c50-93e4-ed9106daa74a 00:31:48.534 [2024-11-20 16:19:46.680445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:48.534 [2024-11-20 16:19:46.680453] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:48.534 [2024-11-20 16:19:46.680461] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:48.534 [2024-11-20 16:19:46.680471] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:48.534 [2024-11-20 16:19:46.680478] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:48.534 [2024-11-20 16:19:46.680487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:48.534 
[2024-11-20 16:19:46.680502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:48.534 [2024-11-20 16:19:46.680510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:48.534 [2024-11-20 16:19:46.680517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:48.534 [2024-11-20 16:19:46.680524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.534 [2024-11-20 16:19:46.680532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:48.534 [2024-11-20 16:19:46.680543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:31:48.534 [2024-11-20 16:19:46.680554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.534 [2024-11-20 16:19:46.694459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.534 [2024-11-20 16:19:46.694645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:48.535 [2024-11-20 16:19:46.694664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.885 ms 00:31:48.535 [2024-11-20 16:19:46.694674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.535 [2024-11-20 16:19:46.695107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.535 [2024-11-20 16:19:46.695129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:48.535 [2024-11-20 16:19:46.695141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:31:48.535 [2024-11-20 16:19:46.695149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.535 [2024-11-20 16:19:46.732000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.535 [2024-11-20 16:19:46.732051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:48.535 [2024-11-20 16:19:46.732064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.535 [2024-11-20 16:19:46.732073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.535 [2024-11-20 16:19:46.732149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.535 [2024-11-20 16:19:46.732166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:48.535 [2024-11-20 16:19:46.732176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.535 [2024-11-20 16:19:46.732185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.535 [2024-11-20 16:19:46.732283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.535 [2024-11-20 16:19:46.732302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:48.535 [2024-11-20 16:19:46.732312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.535 [2024-11-20 16:19:46.732321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.535 [2024-11-20 16:19:46.732339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.535 [2024-11-20 16:19:46.732348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:48.535 [2024-11-20 16:19:46.732359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.535 [2024-11-20 16:19:46.732367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.818148] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.818214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:48.796 [2024-11-20 16:19:46.818228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.818238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.887662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.887760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:48.796 [2024-11-20 16:19:46.887774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.887782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.887848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.887858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:48.796 [2024-11-20 16:19:46.887868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.887876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.887935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.887946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:48.796 [2024-11-20 16:19:46.887980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.887994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.888099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.888110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:48.796 [2024-11-20 16:19:46.888120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.888128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.888165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.888175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:48.796 [2024-11-20 16:19:46.888184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.888193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.888241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.888250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:48.796 [2024-11-20 16:19:46.888259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.888267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.796 [2024-11-20 16:19:46.888313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.796 [2024-11-20 16:19:46.888323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:48.796 [2024-11-20 16:19:46.888332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.796 [2024-11-20 16:19:46.888344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:48.796 [2024-11-20 16:19:46.888477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.153 ms, result 0 00:31:49.739 00:31:49.739 00:31:49.739 16:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:51.664 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:51.925 16:19:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:51.925 16:19:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:51.925 16:19:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:51.925 16:19:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:51.925 Process with pid 80416 is not found 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80416 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80416 ']' 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80416 00:31:51.925 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80416) - No such process 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80416 is not found' 00:31:51.925 16:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:52.498 Remove shared memory files 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:52.498 ************************************ 00:31:52.498 END TEST ftl_dirty_shutdown 00:31:52.498 ************************************ 00:31:52.498 00:31:52.498 real 4m14.740s 00:31:52.498 user 4m30.970s 00:31:52.498 sys 0m24.297s 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.498 16:19:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:52.498
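The block above is the standard teardown for the dirty-shutdown test: testfile2 is re-checked against an md5 recorded earlier in the test, the work files are removed, and killprocess reaps the target if it is still alive (pid 80416 is already gone here, so only the not-found message is printed). A minimal sketch of the pattern as the xtrace shows it; paths are shortened, the variable names are illustrative, and the real killprocess in autotest_common.sh also signals and waits on a live process, a branch this run never reaches:

    restore_kill() {
        rm -f "$testdir/config/ftl.json" \
              "$testdir/testfile" "$testdir/testfile2" \
              "$testdir/testfile.md5" "$testdir/testfile2.md5"
        killprocess "$spdk_pid"   # illustrative variable; the trace shows pid 80416
        rmmod nbd                 # module cleanup, as in the trace above
        remove_shm                # drops shared-memory leftovers
    }

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1             # '[' -z 80416 ']' above
        if ! kill -0 "$pid" 2> /dev/null; then
            echo "Process with pid $pid is not found"
            return 0                          # the path taken in this run
        fi
        kill "$pid" && wait "$pid"            # simplified reap (assumption)
    }

16:19:50 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:52.498 16:19:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.498 16:19:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.498 16:19:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:52.498 ************************************ 00:31:52.498 START TEST ftl_upgrade_shutdown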
************************************ 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:52.498 * Looking for test storage... 00:31:52.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.498 --rc genhtml_branch_coverage=1 00:31:52.498 --rc genhtml_function_coverage=1 00:31:52.498 --rc genhtml_legend=1 00:31:52.498 --rc geninfo_all_blocks=1 00:31:52.498 --rc geninfo_unexecuted_blocks=1 00:31:52.498 00:31:52.498 ' 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.498 --rc genhtml_branch_coverage=1 00:31:52.498 --rc genhtml_function_coverage=1 00:31:52.498 --rc genhtml_legend=1 00:31:52.498 --rc geninfo_all_blocks=1 00:31:52.498 --rc geninfo_unexecuted_blocks=1 00:31:52.498 00:31:52.498 ' 00:31:52.498 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.498 --rc genhtml_branch_coverage=1 00:31:52.498 --rc genhtml_function_coverage=1 00:31:52.498 --rc genhtml_legend=1 00:31:52.498 --rc geninfo_all_blocks=1 00:31:52.498 --rc geninfo_unexecuted_blocks=1 00:31:52.498 00:31:52.498 ' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.499 --rc genhtml_branch_coverage=1 00:31:52.499 --rc genhtml_function_coverage=1 00:31:52.499 --rc genhtml_legend=1 00:31:52.499 --rc geninfo_all_blocks=1 00:31:52.499 --rc geninfo_unexecuted_blocks=1 00:31:52.499 00:31:52.499 ' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:52.499 16:19:50 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83146 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83146 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83146 ']' 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.499 16:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:52.760 [2024-11-20 16:19:50.786206] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:31:52.760 [2024-11-20 16:19:50.786951] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83146 ] 00:31:52.760 [2024-11-20 16:19:50.954503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.021 [2024-11-20 16:19:51.090940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:53.594 16:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:54.166 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:54.166 { 00:31:54.166 "name": "basen1", 00:31:54.166 "aliases": [ 00:31:54.166 "80d82d47-7015-4bbb-b66b-095bfa07fa54" 00:31:54.166 ], 00:31:54.166 "product_name": "NVMe disk", 00:31:54.166 "block_size": 4096, 00:31:54.166 "num_blocks": 1310720, 00:31:54.166 "uuid": "80d82d47-7015-4bbb-b66b-095bfa07fa54", 00:31:54.166 "numa_id": -1, 00:31:54.166 "assigned_rate_limits": { 00:31:54.166 "rw_ios_per_sec": 0, 00:31:54.166 "rw_mbytes_per_sec": 0, 00:31:54.166 "r_mbytes_per_sec": 0, 00:31:54.166 "w_mbytes_per_sec": 0 00:31:54.166 }, 00:31:54.166 "claimed": true, 00:31:54.166 "claim_type": "read_many_write_one", 00:31:54.166 "zoned": false, 00:31:54.166 "supported_io_types": { 00:31:54.166 "read": true, 00:31:54.166 "write": true, 00:31:54.167 "unmap": true, 00:31:54.167 "flush": true, 00:31:54.167 "reset": true, 00:31:54.167 "nvme_admin": true, 00:31:54.167 "nvme_io": true, 00:31:54.167 "nvme_io_md": false, 00:31:54.167 "write_zeroes": true, 00:31:54.167 "zcopy": false, 00:31:54.167 "get_zone_info": false, 00:31:54.167 "zone_management": false, 00:31:54.167 "zone_append": false, 00:31:54.167 "compare": true, 00:31:54.167 "compare_and_write": false, 00:31:54.167 "abort": true, 00:31:54.167 "seek_hole": false, 00:31:54.167 "seek_data": false, 00:31:54.167 "copy": true, 00:31:54.167 "nvme_iov_md": false 00:31:54.167 }, 00:31:54.167 "driver_specific": { 00:31:54.167 "nvme": [ 00:31:54.167 { 00:31:54.167 "pci_address": "0000:00:11.0", 00:31:54.167 "trid": { 00:31:54.167 "trtype": "PCIe", 00:31:54.167 "traddr": "0000:00:11.0" 00:31:54.167 }, 00:31:54.167 "ctrlr_data": { 00:31:54.167 "cntlid": 0, 00:31:54.167 "vendor_id": "0x1b36", 00:31:54.167 "model_number": "QEMU NVMe Ctrl", 00:31:54.167 "serial_number": "12341", 00:31:54.167 "firmware_revision": "8.0.0", 00:31:54.167 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:54.167 "oacs": { 00:31:54.167 "security": 0, 00:31:54.167 "format": 1, 00:31:54.167 "firmware": 0, 00:31:54.167 "ns_manage": 1 00:31:54.167 }, 00:31:54.167 "multi_ctrlr": false, 00:31:54.167 "ana_reporting": false 00:31:54.167 }, 00:31:54.167 "vs": { 00:31:54.167 "nvme_version": "1.4" 00:31:54.167 }, 00:31:54.167 "ns_data": { 00:31:54.167 "id": 1, 00:31:54.167 "can_share": false 00:31:54.167 } 00:31:54.167 } 00:31:54.167 ], 00:31:54.167 "mp_policy": "active_passive" 00:31:54.167 } 00:31:54.167 } 00:31:54.167 ]' 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:54.167
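The get_bdev_size helper traced above derives a bdev's size in MiB from the bdev_get_bdevs JSON: num_blocks * block_size / 1024 / 1024, i.e. 1310720 blocks * 4096 B = 5120 MiB for basen1; the test then thin-provisions a 20480 MiB volume on top of it in the steps that follow. A minimal re-creation of the helper as the xtrace shows it (rpc.py here stands for the scripts/rpc.py path used above, and a listening SPDK target is assumed):

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for basen1
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 for basen1
        echo $(( nb * bs / 1024 / 1024 ))             # 5120 MiB
    }

16:19:52 ftl.ftl_upgrade_shutdown --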
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:54.167 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:54.427 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=c86ebf52-bffa-4ddf-b28f-4dc5addb70cd 00:31:54.427 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:54.427 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c86ebf52-bffa-4ddf-b28f-4dc5addb70cd 00:31:54.688 16:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:54.949 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6a4d3612-0798-47d5-9bfb-2215b0c549f2 00:31:54.949 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6a4d3612-0798-47d5-9bfb-2215b0c549f2 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=81732945-1991-4179-a555-dcb2356cdb65 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 81732945-1991-4179-a555-dcb2356cdb65 ]] 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 81732945-1991-4179-a555-dcb2356cdb65 5120 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=81732945-1991-4179-a555-dcb2356cdb65 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 81732945-1991-4179-a555-dcb2356cdb65 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=81732945-1991-4179-a555-dcb2356cdb65 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:55.211 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81732945-1991-4179-a555-dcb2356cdb65 00:31:55.476 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:55.476 { 00:31:55.476 "name": "81732945-1991-4179-a555-dcb2356cdb65", 00:31:55.476 "aliases": [ 00:31:55.476 "lvs/basen1p0" 00:31:55.476 ], 00:31:55.476 "product_name": "Logical Volume", 00:31:55.476 "block_size": 4096, 00:31:55.476 "num_blocks": 5242880, 00:31:55.476 "uuid": "81732945-1991-4179-a555-dcb2356cdb65", 00:31:55.476 "assigned_rate_limits": { 00:31:55.476 "rw_ios_per_sec": 0, 00:31:55.476 "rw_mbytes_per_sec": 0, 00:31:55.476 "r_mbytes_per_sec": 0, 00:31:55.476 "w_mbytes_per_sec": 0 00:31:55.476 }, 00:31:55.476 "claimed": false, 00:31:55.476 "zoned": false, 00:31:55.476 "supported_io_types": { 00:31:55.476 "read": true, 00:31:55.476 "write": true, 00:31:55.476 "unmap": true, 00:31:55.476 "flush": false, 00:31:55.476 "reset": true, 00:31:55.476 "nvme_admin": false, 00:31:55.476 "nvme_io": false, 00:31:55.476 "nvme_io_md": false, 00:31:55.476 "write_zeroes": 
true, 00:31:55.476 "zcopy": false, 00:31:55.476 "get_zone_info": false, 00:31:55.476 "zone_management": false, 00:31:55.476 "zone_append": false, 00:31:55.476 "compare": false, 00:31:55.476 "compare_and_write": false, 00:31:55.476 "abort": false, 00:31:55.476 "seek_hole": true, 00:31:55.476 "seek_data": true, 00:31:55.476 "copy": false, 00:31:55.476 "nvme_iov_md": false 00:31:55.476 }, 00:31:55.476 "driver_specific": { 00:31:55.476 "lvol": { 00:31:55.476 "lvol_store_uuid": "6a4d3612-0798-47d5-9bfb-2215b0c549f2", 00:31:55.476 "base_bdev": "basen1", 00:31:55.476 "thin_provision": true, 00:31:55.476 "num_allocated_clusters": 0, 00:31:55.476 "snapshot": false, 00:31:55.476 "clone": false, 00:31:55.476 "esnap_clone": false 00:31:55.476 } 00:31:55.476 } 00:31:55.476 } 00:31:55.476 ]' 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:55.477 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:55.738 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:55.738 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:55.738 16:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:55.998 16:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:55.998 16:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:55.998 16:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 81732945-1991-4179-a555-dcb2356cdb65 -c cachen1p0 --l2p_dram_limit 2 00:31:56.260 [2024-11-20 16:19:54.329562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.260 [2024-11-20 16:19:54.329624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:56.260 [2024-11-20 16:19:54.329642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:56.260 [2024-11-20 16:19:54.329652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.260 [2024-11-20 16:19:54.329746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.260 [2024-11-20 16:19:54.329759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:56.260 [2024-11-20 16:19:54.329770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:31:56.260 [2024-11-20 16:19:54.329778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.329804] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:56.261 [2024-11-20 
16:19:54.330641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:56.261 [2024-11-20 16:19:54.330675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.330683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:56.261 [2024-11-20 16:19:54.330696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:31:56.261 [2024-11-20 16:19:54.330704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.330802] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 4df55a45-21ab-4afd-a800-3177b9e016fd 00:31:56.261 [2024-11-20 16:19:54.332520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.332570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:56.261 [2024-11-20 16:19:54.332582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:56.261 [2024-11-20 16:19:54.332593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.341187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.341240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:56.261 [2024-11-20 16:19:54.341251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.545 ms 00:31:56.261 [2024-11-20 16:19:54.341262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.341309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.341320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:56.261 [2024-11-20 16:19:54.341329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:56.261 [2024-11-20 16:19:54.341342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.341405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.341420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:56.261 [2024-11-20 16:19:54.341428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:56.261 [2024-11-20 16:19:54.341443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.341468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:56.261 [2024-11-20 16:19:54.345899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.345942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:56.261 [2024-11-20 16:19:54.345958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.435 ms 00:31:56.261 [2024-11-20 16:19:54.345966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.345998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.346007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:56.261 [2024-11-20 16:19:54.346017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:56.261 [2024-11-20 16:19:54.346026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.346064] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:56.261 [2024-11-20 16:19:54.346209] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:56.261 [2024-11-20 16:19:54.346227] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:56.261 [2024-11-20 16:19:54.346239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:56.261 [2024-11-20 16:19:54.346253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346261] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346272] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:56.261 [2024-11-20 16:19:54.346280] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:56.261 [2024-11-20 16:19:54.346292] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:56.261 [2024-11-20 16:19:54.346300] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:56.261 [2024-11-20 16:19:54.346311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.346318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:56.261 [2024-11-20 16:19:54.346328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:31:56.261 [2024-11-20 16:19:54.346336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.346420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.261 [2024-11-20 16:19:54.346429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:56.261 [2024-11-20 16:19:54.346440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:31:56.261 [2024-11-20 16:19:54.346456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.261 [2024-11-20 16:19:54.346563] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:56.261 [2024-11-20 16:19:54.346573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:56.261 [2024-11-20 16:19:54.346585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:56.261 [2024-11-20 16:19:54.346610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:56.261 [2024-11-20 16:19:54.346627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:56.261 [2024-11-20 16:19:54.346636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:56.261 [2024-11-20 16:19:54.346642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:56.261 [2024-11-20 16:19:54.346658] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:56.261 [2024-11-20 16:19:54.346667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:56.261 [2024-11-20 16:19:54.346683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:56.261 [2024-11-20 16:19:54.346689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:56.261 [2024-11-20 16:19:54.346708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:56.261 [2024-11-20 16:19:54.346719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:56.261 [2024-11-20 16:19:54.346750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:56.261 [2024-11-20 16:19:54.346757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:56.261 [2024-11-20 16:19:54.346774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:56.261 [2024-11-20 16:19:54.346783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:56.261 [2024-11-20 16:19:54.346799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:56.261 [2024-11-20 16:19:54.346806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:56.261 [2024-11-20 16:19:54.346823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:56.261 [2024-11-20 16:19:54.346831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:56.261 [2024-11-20 16:19:54.346849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:56.261 [2024-11-20 16:19:54.346856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:56.261 [2024-11-20 16:19:54.346872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:56.261 [2024-11-20 16:19:54.346896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:56.261 [2024-11-20 16:19:54.346919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:56.261 [2024-11-20 16:19:54.346928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346935] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:56.261 [2024-11-20 16:19:54.346945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:56.261 [2024-11-20 16:19:54.346952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:56.261 [2024-11-20 16:19:54.346963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:56.261 [2024-11-20 16:19:54.346971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:56.261 [2024-11-20 16:19:54.346982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:56.261 [2024-11-20 16:19:54.346991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:56.261 [2024-11-20 16:19:54.347000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:56.261 [2024-11-20 16:19:54.347006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:56.261 [2024-11-20 16:19:54.347015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:56.261 [2024-11-20 16:19:54.347026] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:56.261 [2024-11-20 16:19:54.347038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:56.262 [2024-11-20 16:19:54.347058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:56.262 [2024-11-20 16:19:54.347082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:56.262 [2024-11-20 16:19:54.347092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:56.262 [2024-11-20 16:19:54.347099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:56.262 [2024-11-20 16:19:54.347108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:56.262 [2024-11-20 16:19:54.347170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:56.262 [2024-11-20 16:19:54.347182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:56.262 [2024-11-20 16:19:54.347201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:56.262 [2024-11-20 16:19:54.347208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:56.262 [2024-11-20 16:19:54.347217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:56.262 [2024-11-20 16:19:54.347225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.262 [2024-11-20 16:19:54.347235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:56.262 [2024-11-20 16:19:54.347242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.733 ms 00:31:56.262 [2024-11-20 16:19:54.347252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.262 [2024-11-20 16:19:54.347290] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:31:56.262 [2024-11-20 16:19:54.347310] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:00.470 [2024-11-20 16:19:58.241494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.241560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:00.470 [2024-11-20 16:19:58.241576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3894.189 ms 00:32:00.470 [2024-11-20 16:19:58.241587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.266665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.266712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:00.470 [2024-11-20 16:19:58.266740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.869 ms 00:32:00.470 [2024-11-20 16:19:58.266751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.266818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.266830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:00.470 [2024-11-20 16:19:58.266839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:00.470 [2024-11-20 16:19:58.266852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.297033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.297070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:00.470 [2024-11-20 16:19:58.297081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.149 ms 00:32:00.470 [2024-11-20 16:19:58.297091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.297118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.297130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:00.470 [2024-11-20 16:19:58.297138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:00.470 [2024-11-20 16:19:58.297147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.297497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.297523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:00.470 [2024-11-20 16:19:58.297532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:32:00.470 [2024-11-20 16:19:58.297541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.297583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.297593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:00.470 [2024-11-20 16:19:58.297603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:00.470 [2024-11-20 16:19:58.297614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.311362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.311395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:00.470 [2024-11-20 16:19:58.311405] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.732 ms 00:32:00.470 [2024-11-20 16:19:58.311413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.336094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:00.470 [2024-11-20 16:19:58.337139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.337178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:00.470 [2024-11-20 16:19:58.337196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.657 ms 00:32:00.470 [2024-11-20 16:19:58.337208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.363635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.470 [2024-11-20 16:19:58.363670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:00.470 [2024-11-20 16:19:58.363683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.380 ms 00:32:00.470 [2024-11-20 16:19:58.363691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.470 [2024-11-20 16:19:58.363779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.363793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:00.471 [2024-11-20 16:19:58.363805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:32:00.471 [2024-11-20 16:19:58.363813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.386850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.386882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:00.471 [2024-11-20 16:19:58.386894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.992 ms 00:32:00.471 [2024-11-20 16:19:58.386902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.409954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.409984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:00.471 [2024-11-20 16:19:58.409996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.012 ms 00:32:00.471 [2024-11-20 16:19:58.410004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.410554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.410572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:00.471 [2024-11-20 16:19:58.410582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:32:00.471 [2024-11-20 16:19:58.410591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.482628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.482664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:00.471 [2024-11-20 16:19:58.482680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.003 ms 00:32:00.471 [2024-11-20 16:19:58.482688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.506845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:00.471 [2024-11-20 16:19:58.506879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:00.471 [2024-11-20 16:19:58.506898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.093 ms 00:32:00.471 [2024-11-20 16:19:58.506906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.530160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.530190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:00.471 [2024-11-20 16:19:58.530202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.231 ms 00:32:00.471 [2024-11-20 16:19:58.530210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.554499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.554540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:00.471 [2024-11-20 16:19:58.554553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.267 ms 00:32:00.471 [2024-11-20 16:19:58.554560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.554586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.554594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:00.471 [2024-11-20 16:19:58.554606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:00.471 [2024-11-20 16:19:58.554613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.554686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:00.471 [2024-11-20 16:19:58.554696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:00.471 [2024-11-20 16:19:58.554708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:32:00.471 [2024-11-20 16:19:58.554716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:00.471 [2024-11-20 16:19:58.556964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4226.356 ms, result 0 00:32:00.471 { 00:32:00.471 "name": "ftl", 00:32:00.471 "uuid": "4df55a45-21ab-4afd-a800-3177b9e016fd" 00:32:00.471 } 00:32:00.471 16:19:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:00.732 [2024-11-20 16:19:58.755171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.733 16:19:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:00.733 16:19:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:00.994 [2024-11-20 16:19:59.139560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:00.994 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:01.255 [2024-11-20 16:19:59.335920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:01.255 16:19:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:01.516 Fill FTL, iteration 1 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83275 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83275 /var/tmp/spdk.tgt.sock 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83275 ']' 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:01.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:01.516 16:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:01.516 [2024-11-20 16:19:59.751127] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:32:01.516 [2024-11-20 16:19:59.751244] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83275 ] 00:32:01.778 [2024-11-20 16:19:59.910168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.778 [2024-11-20 16:20:00.008507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.351 16:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.352 16:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:02.352 16:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:02.613 ftln1 00:32:02.875 16:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:02.875 16:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83275 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83275 ']' 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83275 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83275 00:32:02.875 killing process with pid 83275 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83275' 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83275 00:32:02.875 16:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83275 00:32:04.790 16:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:04.790 16:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:04.790 [2024-11-20 16:20:02.644023] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:32:04.790 [2024-11-20 16:20:02.644141] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83317 ] 00:32:04.790 [2024-11-20 16:20:02.804893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.790 [2024-11-20 16:20:02.905955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.199  [2024-11-20T16:20:05.387Z] Copying: 198/1024 [MB] (198 MBps) [2024-11-20T16:20:06.318Z] Copying: 395/1024 [MB] (197 MBps) [2024-11-20T16:20:07.692Z] Copying: 657/1024 [MB] (262 MBps) [2024-11-20T16:20:07.693Z] Copying: 914/1024 [MB] (257 MBps) [2024-11-20T16:20:08.258Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:32:10.008 00:32:10.265 Calculate MD5 checksum, iteration 1 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:10.265 16:20:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:10.265 [2024-11-20 16:20:08.339586] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:32:10.265 [2024-11-20 16:20:08.339703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83375 ] 00:32:10.265 [2024-11-20 16:20:08.500144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.526 [2024-11-20 16:20:08.599624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.909  [2024-11-20T16:20:10.745Z] Copying: 682/1024 [MB] (682 MBps) [2024-11-20T16:20:11.316Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:32:13.066 00:32:13.066 16:20:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:13.066 16:20:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:15.036 Fill FTL, iteration 2 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=57c4ded7942a100cfd52ace3a4805996 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:15.036 16:20:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:15.295 [2024-11-20 16:20:13.310776] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:32:15.295 [2024-11-20 16:20:13.310888] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83433 ] 00:32:15.295 [2024-11-20 16:20:13.470840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.556 [2024-11-20 16:20:13.575780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.942  [2024-11-20T16:20:16.138Z] Copying: 176/1024 [MB] (176 MBps) [2024-11-20T16:20:17.082Z] Copying: 362/1024 [MB] (186 MBps) [2024-11-20T16:20:18.028Z] Copying: 540/1024 [MB] (178 MBps) [2024-11-20T16:20:18.974Z] Copying: 725/1024 [MB] (185 MBps) [2024-11-20T16:20:19.917Z] Copying: 908/1024 [MB] (183 MBps) [2024-11-20T16:20:20.487Z] Copying: 1024/1024 [MB] (average 180 MBps) 00:32:22.237 00:32:22.237 Calculate MD5 checksum, iteration 2 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:22.237 16:20:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:22.237 [2024-11-20 16:20:20.452994] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:32:22.237 [2024-11-20 16:20:20.453119] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83508 ] 00:32:22.498 [2024-11-20 16:20:20.614527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.498 [2024-11-20 16:20:20.726820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.414  [2024-11-20T16:20:22.925Z] Copying: 685/1024 [MB] (685 MBps) [2024-11-20T16:20:24.311Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:32:26.061 00:32:26.061 16:20:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:26.061 16:20:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:27.980 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:27.980 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=58f99eacc11d9d2f7d46bd6c34770972 00:32:27.980 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:27.980 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:27.980 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:28.242 [2024-11-20 16:20:26.316011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.242 [2024-11-20 16:20:26.316056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:28.242 [2024-11-20 16:20:26.316071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:28.242 [2024-11-20 16:20:26.316079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.242 [2024-11-20 16:20:26.316102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.242 [2024-11-20 16:20:26.316110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:28.242 [2024-11-20 16:20:26.316121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:28.242 [2024-11-20 16:20:26.316129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.242 [2024-11-20 16:20:26.316148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.242 [2024-11-20 16:20:26.316156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:28.242 [2024-11-20 16:20:26.316164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:28.242 [2024-11-20 16:20:26.316171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.242 [2024-11-20 16:20:26.316228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.207 ms, result 0 00:32:28.242 true 00:32:28.242 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:28.504 { 00:32:28.504 "name": "ftl", 00:32:28.504 "properties": [ 00:32:28.504 { 00:32:28.504 "name": "superblock_version", 00:32:28.504 "value": 5, 00:32:28.504 "read-only": true 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "name": "base_device", 00:32:28.504 "bands": [ 00:32:28.504 { 00:32:28.504 "id": 0, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 
00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 1, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 2, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 3, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 4, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 5, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 6, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 7, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 8, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 9, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 10, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 11, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 12, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 13, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 14, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 15, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 16, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 17, 00:32:28.504 "state": "FREE", 00:32:28.504 "validity": 0.0 00:32:28.504 } 00:32:28.504 ], 00:32:28.504 "read-only": true 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "name": "cache_device", 00:32:28.504 "type": "bdev", 00:32:28.504 "chunks": [ 00:32:28.504 { 00:32:28.504 "id": 0, 00:32:28.504 "state": "INACTIVE", 00:32:28.504 "utilization": 0.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 1, 00:32:28.504 "state": "CLOSED", 00:32:28.504 "utilization": 1.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 2, 00:32:28.504 "state": "CLOSED", 00:32:28.504 "utilization": 1.0 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 3, 00:32:28.504 "state": "OPEN", 00:32:28.504 "utilization": 0.001953125 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "id": 4, 00:32:28.504 "state": "OPEN", 00:32:28.504 "utilization": 0.0 00:32:28.504 } 00:32:28.504 ], 00:32:28.504 "read-only": true 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "name": "verbose_mode", 00:32:28.504 "value": true, 00:32:28.504 "unit": "", 00:32:28.504 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:28.504 }, 00:32:28.504 { 00:32:28.504 "name": "prep_upgrade_on_shutdown", 00:32:28.504 "value": false, 00:32:28.504 "unit": "", 00:32:28.504 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:28.504 } 00:32:28.504 ] 00:32:28.504 } 00:32:28.504 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:28.504 [2024-11-20 16:20:26.744431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:28.504 [2024-11-20 16:20:26.744478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:28.504 [2024-11-20 16:20:26.744489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:28.504 [2024-11-20 16:20:26.744496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.504 [2024-11-20 16:20:26.744517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.504 [2024-11-20 16:20:26.744525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:28.504 [2024-11-20 16:20:26.744533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:28.504 [2024-11-20 16:20:26.744539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.504 [2024-11-20 16:20:26.744558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.504 [2024-11-20 16:20:26.744566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:28.504 [2024-11-20 16:20:26.744573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:28.504 [2024-11-20 16:20:26.744579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.504 [2024-11-20 16:20:26.744633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.192 ms, result 0 00:32:28.764 true 00:32:28.764 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:28.764 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:28.764 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:28.764 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:28.764 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:28.764 16:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:29.024 [2024-11-20 16:20:27.168365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.024 [2024-11-20 16:20:27.168414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:29.024 [2024-11-20 16:20:27.168426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:29.024 [2024-11-20 16:20:27.168435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.024 [2024-11-20 16:20:27.168456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.024 [2024-11-20 16:20:27.168463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:29.024 [2024-11-20 16:20:27.168471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:29.024 [2024-11-20 16:20:27.168478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.024 [2024-11-20 16:20:27.168497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.024 [2024-11-20 16:20:27.168504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:29.024 [2024-11-20 16:20:27.168512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:29.024 [2024-11-20 16:20:27.168518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:29.024 [2024-11-20 16:20:27.168571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.195 ms, result 0 00:32:29.024 true 00:32:29.024 16:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:29.284 { 00:32:29.284 "name": "ftl", 00:32:29.284 "properties": [ 00:32:29.284 { 00:32:29.284 "name": "superblock_version", 00:32:29.284 "value": 5, 00:32:29.284 "read-only": true 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "name": "base_device", 00:32:29.284 "bands": [ 00:32:29.284 { 00:32:29.284 "id": 0, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 1, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 2, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 3, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 4, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 5, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 6, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 7, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.284 { 00:32:29.284 "id": 8, 00:32:29.284 "state": "FREE", 00:32:29.284 "validity": 0.0 00:32:29.284 }, 00:32:29.285 { 00:32:29.285 "id": 9, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 10, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 11, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 12, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 13, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 14, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 15, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 16, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 17, 00:32:29.285 "state": "FREE", 00:32:29.285 "validity": 0.0 00:32:29.285 } 00:32:29.285 ], 00:32:29.285 "read-only": true 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "name": "cache_device", 00:32:29.285 "type": "bdev", 00:32:29.285 "chunks": [ 00:32:29.285 { 00:32:29.285 "id": 0, 00:32:29.285 "state": "INACTIVE", 00:32:29.285 "utilization": 0.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 1, 00:32:29.285 "state": "CLOSED", 00:32:29.285 "utilization": 1.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 2, 00:32:29.285 "state": "CLOSED", 00:32:29.285 "utilization": 1.0 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 3, 00:32:29.285 "state": "OPEN", 00:32:29.285 "utilization": 0.001953125 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "id": 4, 00:32:29.285 "state": "OPEN", 00:32:29.285 "utilization": 0.0 00:32:29.285 } 00:32:29.285 ], 00:32:29.285 "read-only": true 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "name": "verbose_mode", 
00:32:29.285 "value": true, 00:32:29.285 "unit": "", 00:32:29.285 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:29.285 }, 00:32:29.285 { 00:32:29.285 "name": "prep_upgrade_on_shutdown", 00:32:29.285 "value": true, 00:32:29.285 "unit": "", 00:32:29.285 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:29.285 } 00:32:29.285 ] 00:32:29.285 } 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83146 ]] 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83146 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83146 ']' 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83146 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83146 00:32:29.285 killing process with pid 83146 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83146' 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83146 00:32:29.285 16:20:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83146 00:32:30.227 [2024-11-20 16:20:28.121059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:30.227 [2024-11-20 16:20:28.135051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.227 [2024-11-20 16:20:28.135090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:30.227 [2024-11-20 16:20:28.135102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:30.227 [2024-11-20 16:20:28.135111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.227 [2024-11-20 16:20:28.135131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:30.227 [2024-11-20 16:20:28.137746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.227 [2024-11-20 16:20:28.137773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:30.227 [2024-11-20 16:20:28.137784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.601 ms 00:32:30.227 [2024-11-20 16:20:28.137793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.738393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.738468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:40.231 [2024-11-20 16:20:37.738485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9600.539 ms 00:32:40.231 [2024-11-20 16:20:37.738500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.740253] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.740285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:40.231 [2024-11-20 16:20:37.740295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.737 ms 00:32:40.231 [2024-11-20 16:20:37.740304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.741456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.741481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:40.231 [2024-11-20 16:20:37.741493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.122 ms 00:32:40.231 [2024-11-20 16:20:37.741506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.752514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.752552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:40.231 [2024-11-20 16:20:37.752563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.970 ms 00:32:40.231 [2024-11-20 16:20:37.752572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.760479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.760519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:40.231 [2024-11-20 16:20:37.760530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.871 ms 00:32:40.231 [2024-11-20 16:20:37.760539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.760628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.760640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:40.231 [2024-11-20 16:20:37.760657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:40.231 [2024-11-20 16:20:37.760666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.770760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.770796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:40.231 [2024-11-20 16:20:37.770806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.076 ms 00:32:40.231 [2024-11-20 16:20:37.770813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.781359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.781395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:40.231 [2024-11-20 16:20:37.781406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.509 ms 00:32:40.231 [2024-11-20 16:20:37.781413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.791501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.791535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:40.231 [2024-11-20 16:20:37.791545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.052 ms 00:32:40.231 [2024-11-20 16:20:37.791553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.801671] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.801706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:40.231 [2024-11-20 16:20:37.801716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.049 ms 00:32:40.231 [2024-11-20 16:20:37.801733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.801769] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:40.231 [2024-11-20 16:20:37.801783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:40.231 [2024-11-20 16:20:37.801793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:40.231 [2024-11-20 16:20:37.801811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:40.231 [2024-11-20 16:20:37.801821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:40.231 [2024-11-20 16:20:37.801938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:40.231 [2024-11-20 16:20:37.801947] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4df55a45-21ab-4afd-a800-3177b9e016fd 00:32:40.231 [2024-11-20 16:20:37.801955] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:40.231 [2024-11-20 16:20:37.801962] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:32:40.231 [2024-11-20 16:20:37.801970] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:40.231 [2024-11-20 16:20:37.801978] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:40.231 [2024-11-20 16:20:37.801985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:40.231 [2024-11-20 16:20:37.801996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:40.231 [2024-11-20 16:20:37.802003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:40.231 [2024-11-20 16:20:37.802010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:40.231 [2024-11-20 16:20:37.802023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:40.231 [2024-11-20 16:20:37.802032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.802044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:40.231 [2024-11-20 16:20:37.802053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:32:40.231 [2024-11-20 16:20:37.802060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.815258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.815292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:40.231 [2024-11-20 16:20:37.815302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.180 ms 00:32:40.231 [2024-11-20 16:20:37.815316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.815684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.231 [2024-11-20 16:20:37.815701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:40.231 [2024-11-20 16:20:37.815711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:32:40.231 [2024-11-20 16:20:37.815718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.860050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.231 [2024-11-20 16:20:37.860092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:40.231 [2024-11-20 16:20:37.860108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.231 [2024-11-20 16:20:37.860117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.860153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.231 [2024-11-20 16:20:37.860167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:40.231 [2024-11-20 16:20:37.860176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.231 [2024-11-20 16:20:37.860195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.231 [2024-11-20 16:20:37.860277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:37.860288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:40.232 [2024-11-20 16:20:37.860297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:37.860308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:37.860326] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:37.860335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:40.232 [2024-11-20 16:20:37.860343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:37.860350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:37.941846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:37.941902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:40.232 [2024-11-20 16:20:37.941921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:37.941930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.008994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:40.232 [2024-11-20 16:20:38.009069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.009191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:40.232 [2024-11-20 16:20:38.009211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.009270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:40.232 [2024-11-20 16:20:38.009290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.009400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:40.232 [2024-11-20 16:20:38.009420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.009462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:40.232 [2024-11-20 16:20:38.009484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.009537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:40.232 [2024-11-20 16:20:38.009556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 
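The statistics dump above is internally consistent: the per-band tallies (261120 + 261120 + 2048 = 524288) match the reported 524288 valid LBAs, and the write-amplification figure checks out as

  WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006

with the difference (786752 - 524288 = 262464 blocks) being writes FTL issued on its own behalf (metadata persistence and other housekeeping) rather than user data.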
[2024-11-20 16:20:38.009615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:40.232 [2024-11-20 16:20:38.009627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:40.232 [2024-11-20 16:20:38.009635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:40.232 [2024-11-20 16:20:38.009643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.232 [2024-11-20 16:20:38.009803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9874.657 ms, result 0 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83728 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83728 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83728 ']' 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:41.177 16:20:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:41.177 [2024-11-20 16:20:39.249903] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
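With the old target gone, upgrade_shutdown.sh@75 brings a fresh TCP target up from the JSON config saved at test/ftl/config/tgt.json, so the new process (pid 83728) reattaches to the same base and cache bdevs; the startup trace that follows shows FTL reloading the superblock, running its layout-upgrade step, and scrubbing the NV cache before restoring metadata. A condensed sketch of the restart sequence as logged follows; waitforlisten and the exact pid handling live in ftl/common.sh and autotest_common.sh, so the variable plumbing here is illustrative only:

spdk=/home/vagrant/spdk_repo/spdk
# relaunch the target pinned to core 0, restoring the pre-shutdown bdev config
"$spdk/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$spdk/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!
# block until the new process answers RPCs on /var/tmp/spdk.sock
waitforlisten "$spdk_tgt_pid"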
00:32:41.177 [2024-11-20 16:20:39.250067] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83728 ] 00:32:41.177 [2024-11-20 16:20:39.418308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.437 [2024-11-20 16:20:39.552263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.395 [2024-11-20 16:20:40.371670] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:42.395 [2024-11-20 16:20:40.371779] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:42.395 [2024-11-20 16:20:40.529007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.529087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:42.395 [2024-11-20 16:20:40.529103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:42.395 [2024-11-20 16:20:40.529113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.529184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.529197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:42.395 [2024-11-20 16:20:40.529206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:42.395 [2024-11-20 16:20:40.529215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.529241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:42.395 [2024-11-20 16:20:40.530006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:42.395 [2024-11-20 16:20:40.530033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.530042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:42.395 [2024-11-20 16:20:40.530052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.798 ms 00:32:42.395 [2024-11-20 16:20:40.530060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.531753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:42.395 [2024-11-20 16:20:40.546394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.546448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:42.395 [2024-11-20 16:20:40.546471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.643 ms 00:32:42.395 [2024-11-20 16:20:40.546480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.546562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.546573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:42.395 [2024-11-20 16:20:40.546582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:32:42.395 [2024-11-20 16:20:40.546590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.554912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 
16:20:40.554956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:42.395 [2024-11-20 16:20:40.554968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.235 ms 00:32:42.395 [2024-11-20 16:20:40.554978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.555051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.555061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:42.395 [2024-11-20 16:20:40.555071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:32:42.395 [2024-11-20 16:20:40.555079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.555126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.555137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:42.395 [2024-11-20 16:20:40.555149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:42.395 [2024-11-20 16:20:40.555157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.555185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:42.395 [2024-11-20 16:20:40.559310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.559349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:42.395 [2024-11-20 16:20:40.559362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.130 ms 00:32:42.395 [2024-11-20 16:20:40.559374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.559406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.395 [2024-11-20 16:20:40.559416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:42.395 [2024-11-20 16:20:40.559427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:42.395 [2024-11-20 16:20:40.559436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.395 [2024-11-20 16:20:40.559492] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:42.395 [2024-11-20 16:20:40.559519] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:42.395 [2024-11-20 16:20:40.559563] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:42.395 [2024-11-20 16:20:40.559581] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:42.395 [2024-11-20 16:20:40.559691] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:42.395 [2024-11-20 16:20:40.559703] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:42.395 [2024-11-20 16:20:40.559716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:42.395 [2024-11-20 16:20:40.559743] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:42.395 [2024-11-20 16:20:40.559754] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:42.396 [2024-11-20 16:20:40.559767] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:42.396 [2024-11-20 16:20:40.559776] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:42.396 [2024-11-20 16:20:40.559786] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:42.396 [2024-11-20 16:20:40.559796] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:42.396 [2024-11-20 16:20:40.559805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.396 [2024-11-20 16:20:40.559815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:42.396 [2024-11-20 16:20:40.559824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:32:42.396 [2024-11-20 16:20:40.559832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.396 [2024-11-20 16:20:40.559921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.396 [2024-11-20 16:20:40.559930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:42.396 [2024-11-20 16:20:40.559938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:32:42.396 [2024-11-20 16:20:40.559948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.396 [2024-11-20 16:20:40.560053] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:42.396 [2024-11-20 16:20:40.560074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:42.396 [2024-11-20 16:20:40.560096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:42.396 [2024-11-20 16:20:40.560121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:42.396 [2024-11-20 16:20:40.560136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:42.396 [2024-11-20 16:20:40.560143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:42.396 [2024-11-20 16:20:40.560151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:42.396 [2024-11-20 16:20:40.560171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:42.396 [2024-11-20 16:20:40.560179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:42.396 [2024-11-20 16:20:40.560194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:42.396 [2024-11-20 16:20:40.560201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:42.396 [2024-11-20 16:20:40.560215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:42.396 [2024-11-20 16:20:40.560221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560230] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:42.396 [2024-11-20 16:20:40.560237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:42.396 [2024-11-20 16:20:40.560243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:42.396 [2024-11-20 16:20:40.560258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:42.396 [2024-11-20 16:20:40.560264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:42.396 [2024-11-20 16:20:40.560367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:42.396 [2024-11-20 16:20:40.560374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:42.396 [2024-11-20 16:20:40.560388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:42.396 [2024-11-20 16:20:40.560395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:42.396 [2024-11-20 16:20:40.560409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:42.396 [2024-11-20 16:20:40.560416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:42.396 [2024-11-20 16:20:40.560430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:42.396 [2024-11-20 16:20:40.560450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:42.396 [2024-11-20 16:20:40.560473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:42.396 [2024-11-20 16:20:40.560479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560489] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:42.396 [2024-11-20 16:20:40.560499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:42.396 [2024-11-20 16:20:40.560507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:42.396 [2024-11-20 16:20:40.560526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:42.396 [2024-11-20 16:20:40.560533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:42.396 [2024-11-20 16:20:40.560541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:42.396 [2024-11-20 16:20:40.560548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:42.396 [2024-11-20 16:20:40.560556] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:42.396 [2024-11-20 16:20:40.560563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:42.396 [2024-11-20 16:20:40.560572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:42.396 [2024-11-20 16:20:40.560581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:42.396 [2024-11-20 16:20:40.560599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:42.396 [2024-11-20 16:20:40.560622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:42.396 [2024-11-20 16:20:40.560630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:42.396 [2024-11-20 16:20:40.560637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:42.396 [2024-11-20 16:20:40.560644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:42.396 [2024-11-20 16:20:40.560681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:42.397 [2024-11-20 16:20:40.560688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:42.397 [2024-11-20 16:20:40.560696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:42.397 [2024-11-20 16:20:40.560704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.397 [2024-11-20 16:20:40.560712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:42.397 [2024-11-20 16:20:40.560737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:42.397 [2024-11-20 16:20:40.560747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:42.397 [2024-11-20 16:20:40.560755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:42.397 [2024-11-20 16:20:40.560768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.397 [2024-11-20 16:20:40.560778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:42.397 [2024-11-20 16:20:40.560786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.782 ms 00:32:42.397 [2024-11-20 16:20:40.560794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.397 [2024-11-20 16:20:40.560840] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:42.397 [2024-11-20 16:20:40.560851] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:46.613 [2024-11-20 16:20:44.275647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.275709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:46.613 [2024-11-20 16:20:44.275734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3714.792 ms 00:32:46.613 [2024-11-20 16:20:44.275745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.302082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.302130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:46.613 [2024-11-20 16:20:44.302142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.119 ms 00:32:46.613 [2024-11-20 16:20:44.302149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.302216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.302230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:46.613 [2024-11-20 16:20:44.302238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:46.613 [2024-11-20 16:20:44.302246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.333859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.333897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:46.613 [2024-11-20 16:20:44.333908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.569 ms 00:32:46.613 [2024-11-20 16:20:44.333923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.333958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.333968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:46.613 [2024-11-20 16:20:44.333977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:46.613 [2024-11-20 16:20:44.333984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.334352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.334368] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:46.613 [2024-11-20 16:20:44.334376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:32:46.613 [2024-11-20 16:20:44.334384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.334430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.334442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:46.613 [2024-11-20 16:20:44.334455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:46.613 [2024-11-20 16:20:44.334468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.349136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.613 [2024-11-20 16:20:44.349171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:46.613 [2024-11-20 16:20:44.349180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.645 ms 00:32:46.613 [2024-11-20 16:20:44.349188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.613 [2024-11-20 16:20:44.378921] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:46.613 [2024-11-20 16:20:44.378972] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:46.614 [2024-11-20 16:20:44.378998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.379012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:46.614 [2024-11-20 16:20:44.379024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.697 ms 00:32:46.614 [2024-11-20 16:20:44.379033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.393639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.393675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:46.614 [2024-11-20 16:20:44.393686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.556 ms 00:32:46.614 [2024-11-20 16:20:44.393695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.406066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.406098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:46.614 [2024-11-20 16:20:44.406108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.314 ms 00:32:46.614 [2024-11-20 16:20:44.406116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.418743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.418776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:46.614 [2024-11-20 16:20:44.418786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.591 ms 00:32:46.614 [2024-11-20 16:20:44.418793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.419425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.419454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:46.614 [2024-11-20 
16:20:44.419464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:32:46.614 [2024-11-20 16:20:44.419471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.477760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.477817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:46.614 [2024-11-20 16:20:44.477831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 58.268 ms 00:32:46.614 [2024-11-20 16:20:44.477839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.488615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:46.614 [2024-11-20 16:20:44.489335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.489364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:46.614 [2024-11-20 16:20:44.489374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.450 ms 00:32:46.614 [2024-11-20 16:20:44.489382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.489471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.489487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:46.614 [2024-11-20 16:20:44.489496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:46.614 [2024-11-20 16:20:44.489503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.489562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.489575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:46.614 [2024-11-20 16:20:44.489584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:46.614 [2024-11-20 16:20:44.489591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.489616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.489628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:46.614 [2024-11-20 16:20:44.489639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:46.614 [2024-11-20 16:20:44.489647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.489680] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:46.614 [2024-11-20 16:20:44.489694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.489704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:46.614 [2024-11-20 16:20:44.489712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:46.614 [2024-11-20 16:20:44.489719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.514178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.514216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:46.614 [2024-11-20 16:20:44.514227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.425 ms 00:32:46.614 [2024-11-20 16:20:44.514234] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.514304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.614 [2024-11-20 16:20:44.514314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:46.614 [2024-11-20 16:20:44.514322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:32:46.614 [2024-11-20 16:20:44.514333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.614 [2024-11-20 16:20:44.515664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3986.217 ms, result 0 00:32:46.614 [2024-11-20 16:20:44.530535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.614 [2024-11-20 16:20:44.546519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:46.614 [2024-11-20 16:20:44.554650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:47.187 16:20:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.187 16:20:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:47.187 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:47.187 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:47.187 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:47.450 [2024-11-20 16:20:45.479508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:47.450 [2024-11-20 16:20:45.479552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:47.450 [2024-11-20 16:20:45.479564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:47.450 [2024-11-20 16:20:45.479575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:47.450 [2024-11-20 16:20:45.479598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:47.450 [2024-11-20 16:20:45.479607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:47.450 [2024-11-20 16:20:45.479615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:47.450 [2024-11-20 16:20:45.479622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:47.450 [2024-11-20 16:20:45.479642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:47.450 [2024-11-20 16:20:45.479650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:47.450 [2024-11-20 16:20:45.479658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:47.450 [2024-11-20 16:20:45.479665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:47.450 [2024-11-20 16:20:45.479732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.206 ms, result 0 00:32:47.450 true 00:32:47.450 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:47.450 { 00:32:47.450 "name": "ftl", 00:32:47.450 "properties": [ 00:32:47.450 { 00:32:47.450 "name": "superblock_version", 00:32:47.450 "value": 5, 00:32:47.450 "read-only": true 00:32:47.450 }, 
00:32:47.450 { 00:32:47.450 "name": "base_device", 00:32:47.450 "bands": [ 00:32:47.450 { 00:32:47.450 "id": 0, 00:32:47.450 "state": "CLOSED", 00:32:47.450 "validity": 1.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 1, 00:32:47.450 "state": "CLOSED", 00:32:47.450 "validity": 1.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 2, 00:32:47.450 "state": "CLOSED", 00:32:47.450 "validity": 0.007843137254901933 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 3, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 4, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 5, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 6, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 7, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 8, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 9, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 10, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 11, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 12, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 13, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 14, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 15, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 16, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 17, 00:32:47.450 "state": "FREE", 00:32:47.450 "validity": 0.0 00:32:47.450 } 00:32:47.450 ], 00:32:47.450 "read-only": true 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "name": "cache_device", 00:32:47.450 "type": "bdev", 00:32:47.450 "chunks": [ 00:32:47.450 { 00:32:47.450 "id": 0, 00:32:47.450 "state": "INACTIVE", 00:32:47.450 "utilization": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 1, 00:32:47.450 "state": "OPEN", 00:32:47.450 "utilization": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 2, 00:32:47.450 "state": "OPEN", 00:32:47.450 "utilization": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 3, 00:32:47.450 "state": "FREE", 00:32:47.450 "utilization": 0.0 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "id": 4, 00:32:47.450 "state": "FREE", 00:32:47.450 "utilization": 0.0 00:32:47.450 } 00:32:47.450 ], 00:32:47.450 "read-only": true 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "name": "verbose_mode", 00:32:47.450 "value": true, 00:32:47.450 "unit": "", 00:32:47.450 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:47.450 }, 00:32:47.450 { 00:32:47.450 "name": "prep_upgrade_on_shutdown", 00:32:47.450 "value": false, 00:32:47.450 "unit": "", 00:32:47.450 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:47.450 } 00:32:47.450 ] 00:32:47.450 } 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:47.712 16:20:45 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:47.712 16:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:47.973 Validate MD5 checksum, iteration 1 00:32:47.973 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:47.973 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:47.973 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:47.973 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:47.973 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:47.973 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:47.974 16:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:47.974 [2024-11-20 16:20:46.205788] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
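The two jq pipelines traced above are what reduce that properties dump to pass/fail counts. Restated as a standalone sketch (repository paths shortened; the jq filters and the used/opened names are taken verbatim from the trace, the rest is paraphrase):

  # Cache chunks still holding data (utilization != 0.0): anything nonzero
  # here would mean unwritten user data sitting in the NV cache.
  used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl | jq \
    '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')

  # Bands still in the OPENED state; the dump above shows only CLOSED/FREE.
  opened=$(scripts/rpc.py bdev_ftl_get_properties -b ftl | jq \
    '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')

Both counts came back 0 in this run (the [[ 0 -ne 0 ]] guards fail twice), so the checksum phase can start.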
00:32:47.974 [2024-11-20 16:20:46.206427] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83823 ] 00:32:48.236 [2024-11-20 16:20:46.369934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.236 [2024-11-20 16:20:46.468826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.160  [2024-11-20T16:20:48.672Z] Copying: 635/1024 [MB] (635 MBps) [2024-11-20T16:20:50.058Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:32:51.808 00:32:51.808 16:20:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:51.808 16:20:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=57c4ded7942a100cfd52ace3a4805996 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 57c4ded7942a100cfd52ace3a4805996 != \5\7\c\4\d\e\d\7\9\4\2\a\1\0\0\c\f\d\5\2\a\c\e\3\a\4\8\0\5\9\9\6 ]] 00:32:53.727 Validate MD5 checksum, iteration 2 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:53.727 16:20:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 [2024-11-20 16:20:51.871430] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
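The loop is now on its second 1 GiB slice. Reconstructed from the traced commands, test_validate_checksum has roughly this shape (the $testfile variable and the expected-digest array are paraphrases; the real script's bookkeeping may differ):

  skip=0
  for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # tcp_dd wraps spdk_dd with the NVMe/TCP initiator config (ini.json):
    # read 1024 x 1 MiB blocks from bdev ftln1 at queue depth 2.
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    # Every slice must hash to the digest recorded when the test pattern was
    # written (57c4ded7... and 58f99eac... in this run); the same comparison
    # is repeated after the dirty shutdown below.
    [[ $sum == "${expected[i]}" ]] || return 1
  done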
00:32:53.727 [2024-11-20 16:20:51.871546] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83886 ] 00:32:53.989 [2024-11-20 16:20:52.033155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.989 [2024-11-20 16:20:52.133777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.905  [2024-11-20T16:20:54.417Z] Copying: 630/1024 [MB] (630 MBps) [2024-11-20T16:20:58.631Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:33:00.381 00:33:00.381 16:20:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:00.381 16:20:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=58f99eacc11d9d2f7d46bd6c34770972 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 58f99eacc11d9d2f7d46bd6c34770972 != \5\8\f\9\9\e\a\c\c\1\1\d\9\d\2\f\7\d\4\6\b\d\6\c\3\4\7\7\0\9\7\2 ]] 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83728 ]] 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83728 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83975 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83975 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83975 ']' 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
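This is the deliberately unclean restart at the heart of the test: the old target (pid 83728) is SIGKILLed so FTL has no chance to persist a clean state, then a fresh target is brought up from the saved tgt.json. Abridged from the helpers traced above in ftl/common.sh (paths shortened):

  # tcp_target_shutdown_dirty: no RPC teardown, no SIGTERM -- just SIGKILL,
  # which leaves the FTL superblock dirty and forces recovery on next load.
  kill -9 "$spdk_tgt_pid"                    # 83728 in this run
  unset spdk_tgt_pid

  # tcp_target_setup: relaunch from the captured config, then block until
  # the new process (83975 here) answers on /var/tmp/spdk.sock.
  build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"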
00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.299 16:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:02.299 [2024-11-20 16:21:00.356354] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:33:02.299 [2024-11-20 16:21:00.356471] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83975 ] 00:33:02.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83728 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:02.299 [2024-11-20 16:21:00.511883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.560 [2024-11-20 16:21:00.612242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.134 [2024-11-20 16:21:01.244693] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:03.134 [2024-11-20 16:21:01.244759] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:03.397 [2024-11-20 16:21:01.397652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.397689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:03.397 [2024-11-20 16:21:01.397700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:03.397 [2024-11-20 16:21:01.397707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.397765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.397774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:03.397 [2024-11-20 16:21:01.397781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:33:03.397 [2024-11-20 16:21:01.397787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.397802] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:03.397 [2024-11-20 16:21:01.398369] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:03.397 [2024-11-20 16:21:01.398388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.398395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:03.397 [2024-11-20 16:21:01.398402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.590 ms 00:33:03.397 [2024-11-20 16:21:01.398407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.398631] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:03.397 [2024-11-20 16:21:01.412964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.412990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:03.397 [2024-11-20 16:21:01.413000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.333 ms 00:33:03.397 [2024-11-20 16:21:01.413007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.419958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:03.397 [2024-11-20 16:21:01.419980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:03.397 [2024-11-20 16:21:01.419990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:33:03.397 [2024-11-20 16:21:01.419996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.420247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.420257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:03.397 [2024-11-20 16:21:01.420265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:33:03.397 [2024-11-20 16:21:01.420270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.420313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.420321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:03.397 [2024-11-20 16:21:01.420328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:33:03.397 [2024-11-20 16:21:01.420334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.420354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.420361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:03.397 [2024-11-20 16:21:01.420368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:03.397 [2024-11-20 16:21:01.420374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.420390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:03.397 [2024-11-20 16:21:01.422578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.422598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:03.397 [2024-11-20 16:21:01.422606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.191 ms 00:33:03.397 [2024-11-20 16:21:01.422613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.422640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.422647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:03.397 [2024-11-20 16:21:01.422653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:03.397 [2024-11-20 16:21:01.422659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.422675] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:03.397 [2024-11-20 16:21:01.422692] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:03.397 [2024-11-20 16:21:01.422730] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:03.397 [2024-11-20 16:21:01.422745] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:03.397 [2024-11-20 16:21:01.422827] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:03.397 [2024-11-20 16:21:01.422836] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:03.397 [2024-11-20 16:21:01.422844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:03.397 [2024-11-20 16:21:01.422852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:03.397 [2024-11-20 16:21:01.422860] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:03.397 [2024-11-20 16:21:01.422866] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:03.397 [2024-11-20 16:21:01.422871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:03.397 [2024-11-20 16:21:01.422877] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:03.397 [2024-11-20 16:21:01.422882] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:03.397 [2024-11-20 16:21:01.422889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.422896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:03.397 [2024-11-20 16:21:01.422903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.215 ms 00:33:03.397 [2024-11-20 16:21:01.422908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.422974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.397 [2024-11-20 16:21:01.422986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:03.397 [2024-11-20 16:21:01.422992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:33:03.397 [2024-11-20 16:21:01.422997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.397 [2024-11-20 16:21:01.423074] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:03.397 [2024-11-20 16:21:01.423082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:03.397 [2024-11-20 16:21:01.423091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:03.397 [2024-11-20 16:21:01.423097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.397 [2024-11-20 16:21:01.423103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:03.397 [2024-11-20 16:21:01.423109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:03.397 [2024-11-20 16:21:01.423120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:03.397 [2024-11-20 16:21:01.423126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:03.397 [2024-11-20 16:21:01.423132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:03.397 [2024-11-20 16:21:01.423137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.397 [2024-11-20 16:21:01.423142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:03.398 [2024-11-20 16:21:01.423147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:03.398 [2024-11-20 16:21:01.423152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:03.398 [2024-11-20 16:21:01.423163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:03.398 [2024-11-20 16:21:01.423169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:03.398 [2024-11-20 16:21:01.423179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:03.398 [2024-11-20 16:21:01.423184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:03.398 [2024-11-20 16:21:01.423194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:03.398 [2024-11-20 16:21:01.423199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:03.398 [2024-11-20 16:21:01.423214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:03.398 [2024-11-20 16:21:01.423219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:03.398 [2024-11-20 16:21:01.423229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:03.398 [2024-11-20 16:21:01.423234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:03.398 [2024-11-20 16:21:01.423244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:03.398 [2024-11-20 16:21:01.423249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:03.398 [2024-11-20 16:21:01.423258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:03.398 [2024-11-20 16:21:01.423263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:03.398 [2024-11-20 16:21:01.423275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:03.398 [2024-11-20 16:21:01.423292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:03.398 [2024-11-20 16:21:01.423308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:03.398 [2024-11-20 16:21:01.423313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423319] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:03.398 [2024-11-20 16:21:01.423326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:03.398 [2024-11-20 16:21:01.423332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:03.398 [2024-11-20 16:21:01.423343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:03.398 [2024-11-20 16:21:01.423349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:03.398 [2024-11-20 16:21:01.423354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:03.398 [2024-11-20 16:21:01.423359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:03.398 [2024-11-20 16:21:01.423365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:03.398 [2024-11-20 16:21:01.423371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:03.398 [2024-11-20 16:21:01.423377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:03.398 [2024-11-20 16:21:01.423384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:03.398 [2024-11-20 16:21:01.423396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:03.398 [2024-11-20 16:21:01.423413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:03.398 [2024-11-20 16:21:01.423419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:03.398 [2024-11-20 16:21:01.423424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:03.398 [2024-11-20 16:21:01.423429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:03.398 [2024-11-20 16:21:01.423468] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:03.398 [2024-11-20 16:21:01.423476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:03.398 [2024-11-20 16:21:01.423490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:03.398 [2024-11-20 16:21:01.423496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:03.398 [2024-11-20 16:21:01.423502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:03.398 [2024-11-20 16:21:01.423508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.423514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:03.398 [2024-11-20 16:21:01.423520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:33:03.398 [2024-11-20 16:21:01.423525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.445155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.445180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:03.398 [2024-11-20 16:21:01.445188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.590 ms 00:33:03.398 [2024-11-20 16:21:01.445195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.445227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.445234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:03.398 [2024-11-20 16:21:01.445240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:03.398 [2024-11-20 16:21:01.445246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.471824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.471850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:03.398 [2024-11-20 16:21:01.471858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.536 ms 00:33:03.398 [2024-11-20 16:21:01.471864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.471887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.471893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:03.398 [2024-11-20 16:21:01.471900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:03.398 [2024-11-20 16:21:01.471906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.471983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.471992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:03.398 [2024-11-20 16:21:01.471998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:03.398 [2024-11-20 16:21:01.472004] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.472037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.472044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:03.398 [2024-11-20 16:21:01.472050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:03.398 [2024-11-20 16:21:01.472057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.485435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.485459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:03.398 [2024-11-20 16:21:01.485468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.361 ms 00:33:03.398 [2024-11-20 16:21:01.485475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.485555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.485564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:03.398 [2024-11-20 16:21:01.485571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:03.398 [2024-11-20 16:21:01.485577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.520545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.520576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:03.398 [2024-11-20 16:21:01.520588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.952 ms 00:33:03.398 [2024-11-20 16:21:01.520595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.528059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.528085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:03.398 [2024-11-20 16:21:01.528100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:33:03.398 [2024-11-20 16:21:01.528106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.398 [2024-11-20 16:21:01.574248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.398 [2024-11-20 16:21:01.574285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:03.398 [2024-11-20 16:21:01.574300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.100 ms 00:33:03.399 [2024-11-20 16:21:01.574307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.399 [2024-11-20 16:21:01.574435] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:03.399 [2024-11-20 16:21:01.574538] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:03.399 [2024-11-20 16:21:01.574639] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:03.399 [2024-11-20 16:21:01.574745] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:03.399 [2024-11-20 16:21:01.574760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.399 [2024-11-20 16:21:01.574767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:03.399 [2024-11-20 
16:21:01.574775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:33:03.399 [2024-11-20 16:21:01.574782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.399 [2024-11-20 16:21:01.574831] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:03.399 [2024-11-20 16:21:01.574840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.399 [2024-11-20 16:21:01.574850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:03.399 [2024-11-20 16:21:01.574858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:03.399 [2024-11-20 16:21:01.574865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.399 [2024-11-20 16:21:01.586256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.399 [2024-11-20 16:21:01.586286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:03.399 [2024-11-20 16:21:01.586295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.374 ms 00:33:03.399 [2024-11-20 16:21:01.586302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.399 [2024-11-20 16:21:01.592632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.399 [2024-11-20 16:21:01.592657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:03.399 [2024-11-20 16:21:01.592666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:03.399 [2024-11-20 16:21:01.592672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.399 [2024-11-20 16:21:01.592754] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:03.399 [2024-11-20 16:21:01.592915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.399 [2024-11-20 16:21:01.592925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:03.399 [2024-11-20 16:21:01.592932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:33:03.399 [2024-11-20 16:21:01.592939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.343 [2024-11-20 16:21:02.521078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.343 [2024-11-20 16:21:02.521123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:04.343 [2024-11-20 16:21:02.521135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 927.528 ms 00:33:04.343 [2024-11-20 16:21:02.521144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.605 [2024-11-20 16:21:02.595541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.605 [2024-11-20 16:21:02.595580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:04.605 [2024-11-20 16:21:02.595597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 71.492 ms 00:33:04.605 [2024-11-20 16:21:02.595606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.605 [2024-11-20 16:21:02.596526] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:04.605 [2024-11-20 16:21:02.596558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.605 [2024-11-20 16:21:02.596566] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:04.605 [2024-11-20 16:21:02.596576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.918 ms 00:33:04.605 [2024-11-20 16:21:02.596583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.605 [2024-11-20 16:21:02.596615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.605 [2024-11-20 16:21:02.596624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:04.605 [2024-11-20 16:21:02.596632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:04.605 [2024-11-20 16:21:02.596639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.605 [2024-11-20 16:21:02.596676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 1003.921 ms, result 0 00:33:04.605 [2024-11-20 16:21:02.596712] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:04.605 [2024-11-20 16:21:02.596896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.605 [2024-11-20 16:21:02.596908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:04.605 [2024-11-20 16:21:02.596917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:33:04.605 [2024-11-20 16:21:02.596924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.624827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.624900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:05.548 [2024-11-20 16:21:03.624917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1026.986 ms 00:33:05.548 [2024-11-20 16:21:03.624927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.629567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.629605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:05.548 [2024-11-20 16:21:03.629615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.579 ms 00:33:05.548 [2024-11-20 16:21:03.629623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.630506] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:05.548 [2024-11-20 16:21:03.630542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.630550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:05.548 [2024-11-20 16:21:03.630559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.890 ms 00:33:05.548 [2024-11-20 16:21:03.630567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.630599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.630608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:05.548 [2024-11-20 16:21:03.630617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:05.548 [2024-11-20 16:21:03.630624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 
16:21:03.630660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 1033.939 ms, result 0 00:33:05.548 [2024-11-20 16:21:03.630706] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:05.548 [2024-11-20 16:21:03.630719] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:05.548 [2024-11-20 16:21:03.630757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.630766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:05.548 [2024-11-20 16:21:03.630775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2038.030 ms 00:33:05.548 [2024-11-20 16:21:03.630783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.630813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.630823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:05.548 [2024-11-20 16:21:03.630837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:05.548 [2024-11-20 16:21:03.630845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.642617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:05.548 [2024-11-20 16:21:03.642740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.642751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:05.548 [2024-11-20 16:21:03.642761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.879 ms 00:33:05.548 [2024-11-20 16:21:03.642769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.643466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.643491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:05.548 [2024-11-20 16:21:03.643503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.631 ms 00:33:05.548 [2024-11-20 16:21:03.643511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.645787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.645812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:05.548 [2024-11-20 16:21:03.645822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.261 ms 00:33:05.548 [2024-11-20 16:21:03.645830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.645868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.645877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:05.548 [2024-11-20 16:21:03.645884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:05.548 [2024-11-20 16:21:03.645896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.646004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.646014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:05.548 
[2024-11-20 16:21:03.646023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:05.548 [2024-11-20 16:21:03.646030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.646052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.646061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:05.548 [2024-11-20 16:21:03.646069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:05.548 [2024-11-20 16:21:03.646077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.646110] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:05.548 [2024-11-20 16:21:03.646120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.646127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:05.548 [2024-11-20 16:21:03.646135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:05.548 [2024-11-20 16:21:03.646143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.646199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.548 [2024-11-20 16:21:03.646210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:05.548 [2024-11-20 16:21:03.646220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:33:05.548 [2024-11-20 16:21:03.646229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.548 [2024-11-20 16:21:03.647365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2249.206 ms, result 0 00:33:05.548 [2024-11-20 16:21:03.663027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.548 [2024-11-20 16:21:03.679029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:05.548 [2024-11-20 16:21:03.687175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:05.548 Validate MD5 checksum, iteration 1 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:05.548 16:21:03 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:05.548 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:05.549 16:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:05.549 [2024-11-20 16:21:03.783178] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:33:05.549 [2024-11-20 16:21:03.783304] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84017 ] 00:33:05.810 [2024-11-20 16:21:03.941408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.810 [2024-11-20 16:21:04.040083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.725  [2024-11-20T16:21:06.547Z] Copying: 580/1024 [MB] (580 MBps) [2024-11-20T16:21:09.092Z] Copying: 1024/1024 [MB] (average 552 MBps) 00:33:10.842 00:33:10.842 16:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:10.842 16:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:12.760 Validate MD5 checksum, iteration 2 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=57c4ded7942a100cfd52ace3a4805996 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 57c4ded7942a100cfd52ace3a4805996 != \5\7\c\4\d\e\d\7\9\4\2\a\1\0\0\c\f\d\5\2\a\c\e\3\a\4\8\0\5\9\9\6 ]] 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:12.760 16:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:12.760 [2024-11-20 16:21:10.938325] Starting SPDK v25.01-pre git sha1 
0728de5b0 / DPDK 24.03.0 initialization...
00:33:12.760 [2024-11-20 16:21:10.938447] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84095 ]
00:33:13.022 [2024-11-20 16:21:11.094451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:13.022 [2024-11-20 16:21:11.191911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:14.938  [2024-11-20T16:21:13.760Z] Copying: 570/1024 [MB] (570 MBps) [2024-11-20T16:21:16.311Z] Copying: 1024/1024 [MB] (average 575 MBps)
00:33:18.061
00:33:18.061 16:21:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:33:18.061 16:21:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=58f99eacc11d9d2f7d46bd6c34770972
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 58f99eacc11d9d2f7d46bd6c34770972 != \5\8\f\9\9\e\a\c\c\1\1\d\9\d\2\f\7\d\4\6\b\d\6\c\3\4\7\7\0\9\7\2 ]]
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83975 ]]
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83975
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83975 ']'
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83975
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83975
00:33:19.979 killing process with pid 83975 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83975'
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83975
00:33:19.979 16:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83975
00:33:20.927 [2024-11-20 16:21:18.856049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:33:20.927 [2024-11-20 16:21:18.872150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.872205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:33:20.927 [2024-11-20 16:21:18.872221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:33:20.927 [2024-11-20 16:21:18.872230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.872254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:33:20.927 [2024-11-20 16:21:18.875197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.875232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:33:20.927 [2024-11-20 16:21:18.875250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.928 ms
00:33:20.927 [2024-11-20 16:21:18.875259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.875484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.875495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:33:20.927 [2024-11-20 16:21:18.875506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms
00:33:20.927 [2024-11-20 16:21:18.875515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.877396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.877432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:33:20.927 [2024-11-20 16:21:18.877442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.863 ms
00:33:20.927 [2024-11-20 16:21:18.877451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.878601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.878628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:33:20.927 [2024-11-20 16:21:18.878639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.110 ms
00:33:20.927 [2024-11-20 16:21:18.878648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.889522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.889569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:33:20.927 [2024-11-20 16:21:18.889582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.820 ms
00:33:20.927 [2024-11-20 16:21:18.889597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.895489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.895539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:33:20.927 [2024-11-20 16:21:18.895552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.846 ms
00:33:20.927 [2024-11-20 16:21:18.895562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.895666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.895677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:33:20.927 [2024-11-20 16:21:18.895687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms
00:33:20.927 [2024-11-20 16:21:18.895697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.906118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.906162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:33:20.927 [2024-11-20 16:21:18.906173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.396 ms
00:33:20.927 [2024-11-20 16:21:18.906182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.916840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.916887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:33:20.927 [2024-11-20 16:21:18.916900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.615 ms
00:33:20.927 [2024-11-20 16:21:18.916908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.927 [2024-11-20 16:21:18.927141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.927 [2024-11-20 16:21:18.927186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:33:20.928 [2024-11-20 16:21:18.927198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.190 ms
00:33:20.928 [2024-11-20 16:21:18.927207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.937699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.928 [2024-11-20 16:21:18.937750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:33:20.928 [2024-11-20 16:21:18.937762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.418 ms
00:33:20.928 [2024-11-20 16:21:18.937770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.937813] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:33:20.928 [2024-11-20 16:21:18.937831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:33:20.928 [2024-11-20 16:21:18.937844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:33:20.928 [2024-11-20 16:21:18.937854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:33:20.928 [2024-11-20 16:21:18.937864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.937995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:33:20.928 [2024-11-20 16:21:18.938007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:33:20.928 [2024-11-20 16:21:18.938016] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4df55a45-21ab-4afd-a800-3177b9e016fd
00:33:20.928 [2024-11-20 16:21:18.938025] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:33:20.928 [2024-11-20 16:21:18.938036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:33:20.928 [2024-11-20 16:21:18.938044] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:33:20.928 [2024-11-20 16:21:18.938053] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:33:20.928 [2024-11-20 16:21:18.938061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:33:20.928 [2024-11-20 16:21:18.938070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:33:20.928 [2024-11-20 16:21:18.938079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:33:20.928 [2024-11-20 16:21:18.938087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:33:20.928 [2024-11-20 16:21:18.938098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:33:20.928 [2024-11-20 16:21:18.938107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.928 [2024-11-20 16:21:18.938123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:33:20.928 [2024-11-20 16:21:18.938134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms
00:33:20.928 [2024-11-20 16:21:18.938143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.951949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.928 [2024-11-20 16:21:18.951993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:33:20.928 [2024-11-20 16:21:18.952006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.785 ms
00:33:20.928 [2024-11-20 16:21:18.952023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.952428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:20.928 [2024-11-20 16:21:18.952440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:33:20.928 [2024-11-20 16:21:18.952449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.381 ms
00:33:20.928 [2024-11-20 16:21:18.952456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.999060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:18.999113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:33:20.928 [2024-11-20 16:21:18.999128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:18.999138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.999189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:18.999199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:33:20.928 [2024-11-20 16:21:18.999209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:18.999218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.999333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:18.999346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:33:20.928 [2024-11-20 16:21:18.999356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:18.999365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:18.999385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:18.999397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:33:20.928 [2024-11-20 16:21:18.999407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:18.999416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.085262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.085323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:33:20.928 [2024-11-20 16:21:19.085337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.085346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.155456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.155520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:33:20.928 [2024-11-20 16:21:19.155535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.155544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.155650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.155660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:33:20.928 [2024-11-20 16:21:19.155670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.155678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.155764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.155776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:33:20.928 [2024-11-20 16:21:19.155791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.155808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.155914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.155934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:33:20.928 [2024-11-20 16:21:19.155943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.155951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.155988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.155997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:33:20.928 [2024-11-20 16:21:19.156006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.156018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.156062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.156082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:33:20.928 [2024-11-20 16:21:19.156091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.156099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.156149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:20.928 [2024-11-20 16:21:19.156204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:33:20.928 [2024-11-20 16:21:19.156215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:20.928 [2024-11-20 16:21:19.156223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:20.928 [2024-11-20 16:21:19.156357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 284.170 ms, result 0
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:21.871 Remove shared memory files 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:21.871 16:21:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:33:21.871 16:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:33:21.871 16:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83728
00:33:21.871 16:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:21.871 16:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:33:21.871
00:33:21.871 real 1m29.494s
00:33:21.871 user 2m4.126s
00:33:21.871 sys 0m19.230s
00:33:21.871 16:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:21.871 ************************************
00:33:21.871 END TEST ftl_upgrade_shutdown
00:33:21.871 ************************************
00:33:21.871 16:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:33:21.871 16:21:20 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:33:21.871 16:21:20 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:21.871 16:21:20 ftl -- ftl/ftl.sh@14 -- # killprocess 75004
00:33:21.871 16:21:20 ftl -- common/autotest_common.sh@954 -- # '[' -z 75004 ']'
00:33:21.871 16:21:20 ftl -- common/autotest_common.sh@958 -- # kill -0 75004
00:33:21.871 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75004) - No such process
00:33:21.871 Process with pid 75004 is not found 16:21:20 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75004 is not found'
00:33:21.871 16:21:20 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:21.871 16:21:20 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84221
00:33:21.871 16:21:20 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84221
00:33:21.872 16:21:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 84221 ']'
00:33:21.872 16:21:20 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 16:21:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:21.872 16:21:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:21.872 16:21:20 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:21.872 16:21:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:21.872 16:21:20 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:22.133 [2024-11-20 16:21:20.138165] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization...
00:33:22.133 [2024-11-20 16:21:20.138298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84221 ]
00:33:22.133 [2024-11-20 16:21:20.297150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:22.394 [2024-11-20 16:21:20.397538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:22.968 16:21:21 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:22.968 16:21:21 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:22.968 16:21:21 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:23.228 nvme0n1
00:33:23.228 16:21:21 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:23.228 16:21:21 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:23.228 16:21:21 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:23.489 16:21:21 ftl -- ftl/common.sh@28 -- # stores=6a4d3612-0798-47d5-9bfb-2215b0c549f2
00:33:23.489 16:21:21 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:23.489 16:21:21 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a4d3612-0798-47d5-9bfb-2215b0c549f2
00:33:23.489 16:21:21 ftl -- ftl/ftl.sh@23 -- # killprocess 84221
00:33:23.489 16:21:21 ftl -- common/autotest_common.sh@954 -- # '[' -z 84221 ']'
00:33:23.489 16:21:21 ftl -- common/autotest_common.sh@958 -- # kill -0 84221
00:33:23.489 16:21:21 ftl -- common/autotest_common.sh@959 -- # uname
00:33:23.489 16:21:21 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:23.489 16:21:21 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84221
00:33:23.751 16:21:21 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:23.751 killing process with pid 84221 16:21:21 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:23.751 16:21:21 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84221'
00:33:23.751 16:21:21 ftl -- common/autotest_common.sh@973 -- # kill 84221
00:33:23.751 16:21:21 ftl -- common/autotest_common.sh@978 -- # wait 84221
00:33:25.137 16:21:23 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:25.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:25.398 Waiting for block devices as requested
00:33:25.398 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:25.398 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:25.659 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:25.659 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:30.946 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:30.946 16:21:28 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:30.946 Remove shared memory files 16:21:28 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:30.946 16:21:28 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:30.946 16:21:28 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:30.946 16:21:28 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:30.946 16:21:28 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:30.946 16:21:28 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:30.946
00:33:30.946 real 13m46.270s
00:33:30.946 user 16m13.576s
00:33:30.946 sys 1m16.392s
00:33:30.946 16:21:28 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:30.946 16:21:28 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:30.946 ************************************
00:33:30.946 END TEST ftl
00:33:30.946 ************************************
00:33:30.946 16:21:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:30.946 16:21:28 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:33:30.946 16:21:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:30.946 16:21:28 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:33:30.946 16:21:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:30.946 16:21:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:30.946 16:21:28 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:33:30.946 16:21:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:33:30.946 16:21:28 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:33:30.946 16:21:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:33:30.946 16:21:28 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:30.946 16:21:28 -- common/autotest_common.sh@10 -- # set +x
00:33:30.946 16:21:28 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:33:30.946 16:21:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:33:30.946 16:21:28 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:33:30.946 16:21:28 -- common/autotest_common.sh@10 -- # set +x
00:33:32.328 INFO: APP EXITING
00:33:32.328 INFO: killing all VMs
00:33:32.328 INFO: killing vhost app
00:33:32.328 INFO: EXIT DONE
00:33:32.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:33.161 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:33:33.161 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:33:33.161 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:33:33.161 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:33:33.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:33.995 Cleaning
00:33:33.995 Removing: /var/run/dpdk/spdk0/config
00:33:33.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:33.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:33.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:33.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:33.995 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:33.995 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:33.995 Removing: /var/run/dpdk/spdk0
00:33:33.995 Removing: /var/run/dpdk/spdk_pid56963
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57165
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57377
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57471
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57515
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57638
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57656
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57849
00:33:33.995 Removing: /var/run/dpdk/spdk_pid57948
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58044
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58144
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58241
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58286
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58317
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58393
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58477
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58913
00:33:33.995 Removing: /var/run/dpdk/spdk_pid58966
00:33:33.995 Removing: /var/run/dpdk/spdk_pid59018
00:33:33.995 Removing: /var/run/dpdk/spdk_pid59034
00:33:33.995 Removing: /var/run/dpdk/spdk_pid59125
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59141
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59238
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59248
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59307
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59325
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59383
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59400
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59561
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59598
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59681
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59859
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59943
00:33:33.996 Removing: /var/run/dpdk/spdk_pid59979
00:33:33.996 Removing: /var/run/dpdk/spdk_pid60407
00:33:33.996 Removing: /var/run/dpdk/spdk_pid60505
00:33:33.996 Removing: /var/run/dpdk/spdk_pid60615
00:33:33.996 Removing: /var/run/dpdk/spdk_pid60670
00:33:33.996 Removing: /var/run/dpdk/spdk_pid60698
00:33:33.996 Removing: /var/run/dpdk/spdk_pid60776
00:33:33.996 Removing: /var/run/dpdk/spdk_pid61403
00:33:33.996 Removing: /var/run/dpdk/spdk_pid61440
00:33:33.996 Removing: /var/run/dpdk/spdk_pid61928
00:33:33.996 Removing: /var/run/dpdk/spdk_pid62021
00:33:33.996 Removing: /var/run/dpdk/spdk_pid62136
00:33:33.996 Removing: /var/run/dpdk/spdk_pid62189
00:33:33.996 Removing: /var/run/dpdk/spdk_pid62209
00:33:33.996 Removing: /var/run/dpdk/spdk_pid62240
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64078
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64215
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64219
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64237
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64283
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64287
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64299
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64344
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64348
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64360
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64405
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64409
00:33:33.996 Removing: /var/run/dpdk/spdk_pid64421
00:33:33.996 Removing: /var/run/dpdk/spdk_pid65802
00:33:33.996 Removing: /var/run/dpdk/spdk_pid65899
00:33:33.996 Removing: /var/run/dpdk/spdk_pid67300
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69035
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69104
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69179
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69283
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69375
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69476
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69539
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69614
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69724
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69810
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69906
00:33:33.996 Removing: /var/run/dpdk/spdk_pid69980
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70055
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70159
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70251
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70341
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70415
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70486
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70592
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70684
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70784
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70852
00:33:34.258 Removing: /var/run/dpdk/spdk_pid70932
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71005
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71076
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71179
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71275
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71370
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71433
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71513
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71587
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71661
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71759
00:33:34.258 Removing: /var/run/dpdk/spdk_pid71855
00:33:34.258 Removing: /var/run/dpdk/spdk_pid72000
00:33:34.258 Removing: /var/run/dpdk/spdk_pid72273
00:33:34.258 Removing: /var/run/dpdk/spdk_pid72314
00:33:34.258 Removing: /var/run/dpdk/spdk_pid72747
00:33:34.258 Removing: /var/run/dpdk/spdk_pid72939
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73042
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73161
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73210
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73236
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73548
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73603
00:33:34.258 Removing: /var/run/dpdk/spdk_pid73674
00:33:34.258 Removing: /var/run/dpdk/spdk_pid74063
00:33:34.258 Removing: /var/run/dpdk/spdk_pid74203
00:33:34.258 Removing: /var/run/dpdk/spdk_pid75004
00:33:34.258 Removing: /var/run/dpdk/spdk_pid75137
00:33:34.258 Removing: /var/run/dpdk/spdk_pid75312
00:33:34.258 Removing: /var/run/dpdk/spdk_pid75420
00:33:34.258 Removing: /var/run/dpdk/spdk_pid75795
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76087
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76439
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76610
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76697
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76750
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76842
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76864
00:33:34.258 Removing: /var/run/dpdk/spdk_pid76911
00:33:34.258 Removing: /var/run/dpdk/spdk_pid77104
00:33:34.258 Removing: /var/run/dpdk/spdk_pid77336
00:33:34.258 Removing: /var/run/dpdk/spdk_pid78419
00:33:34.258 Removing: /var/run/dpdk/spdk_pid79348
00:33:34.258 Removing: /var/run/dpdk/spdk_pid80037
00:33:34.258 Removing: /var/run/dpdk/spdk_pid80416
00:33:34.258 Removing: /var/run/dpdk/spdk_pid80547
00:33:34.258 Removing: /var/run/dpdk/spdk_pid80635
00:33:34.258 Removing: /var/run/dpdk/spdk_pid81022
00:33:34.258 Removing: /var/run/dpdk/spdk_pid81092
00:33:34.258 Removing: /var/run/dpdk/spdk_pid81425
00:33:34.258 Removing: /var/run/dpdk/spdk_pid82034
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83146
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83275
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83317
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83375
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83433
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83508
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83728
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83823
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83886
00:33:34.258 Removing: /var/run/dpdk/spdk_pid83975
00:33:34.258 Removing: /var/run/dpdk/spdk_pid84017
00:33:34.258 Removing: /var/run/dpdk/spdk_pid84095
00:33:34.258 Removing: /var/run/dpdk/spdk_pid84221
00:33:34.258 Clean
00:33:34.258 16:21:32 -- common/autotest_common.sh@1453 -- # return 0
00:33:34.258 16:21:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:34.258 16:21:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:34.258 16:21:32 -- common/autotest_common.sh@10 -- # set +x
00:33:34.519 16:21:32 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:34.519 16:21:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:34.519 16:21:32 -- common/autotest_common.sh@10 -- # set +x
00:33:34.519 16:21:32 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:34.519 16:21:32 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:34.519 16:21:32 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:34.519 16:21:32 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:34.519 16:21:32 -- spdk/autotest.sh@398 -- # hostname
00:33:34.519 16:21:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:34.519 geninfo: WARNING: invalid characters removed from testname!
00:34:01.177 16:21:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:04.477 16:22:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:07.028 16:22:04 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:09.610 16:22:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:12.158 16:22:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:15.477 16:22:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:18.027 16:22:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:18.027 16:22:15 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:18.027 16:22:15 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:18.027 16:22:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:18.027 16:22:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:18.027 16:22:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:18.038 + [[ -n 5028 ]]
00:34:18.038 + sudo kill 5028
00:34:18.047 [Pipeline] }
00:34:18.053 [Pipeline] // timeout
00:34:18.059 [Pipeline] }
00:34:18.073 [Pipeline] // stage
00:34:18.078 [Pipeline] }
00:34:18.091 [Pipeline] // catchError
00:34:18.101 [Pipeline] stage
00:34:18.103 [Pipeline] { (Stop VM)
00:34:18.115 [Pipeline] sh
00:34:18.400 + vagrant halt
00:34:21.703 ==> default: Halting domain...
00:34:27.038 [Pipeline] sh
00:34:27.320 + vagrant destroy -f
00:34:30.620 ==> default: Removing domain...
00:34:31.198 [Pipeline] sh
00:34:31.475 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:34:31.484 [Pipeline] }
00:34:31.496 [Pipeline] // stage
00:34:31.500 [Pipeline] }
00:34:31.511 [Pipeline] // dir
00:34:31.515 [Pipeline] }
00:34:31.528 [Pipeline] // wrap
00:34:31.536 [Pipeline] }
00:34:31.548 [Pipeline] // catchError
00:34:31.556 [Pipeline] stage
00:34:31.558 [Pipeline] { (Epilogue)
00:34:31.570 [Pipeline] sh
00:34:31.852 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:38.539 [Pipeline] catchError
00:34:38.541 [Pipeline] {
00:34:38.552 [Pipeline] sh
00:34:38.838 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:38.838 Artifacts sizes are good
00:34:38.848 [Pipeline] }
00:34:38.861 [Pipeline] // catchError
00:34:38.871 [Pipeline] archiveArtifacts
00:34:38.878 Archiving artifacts
00:34:38.975 [Pipeline] cleanWs
00:34:38.987 [WS-CLEANUP] Deleting project workspace...
00:34:38.987 [WS-CLEANUP] Deferred wipeout is used...
00:34:38.997 [WS-CLEANUP] done
00:34:38.999 [Pipeline] }
00:34:39.013 [Pipeline] // stage
00:34:39.018 [Pipeline] }
00:34:39.030 [Pipeline] // node
00:34:39.035 [Pipeline] End of Pipeline
00:34:39.070 Finished: SUCCESS